Serverless stage environment variables using dotenv (.env) - aws-lambda

I'm new to serverless.
So far I was able to deploy and use .env for the app.
Then, under provider in the serverless.yml file, I changed the stage property to a different stage. I also made a new .env.{stage} file.
After re-deploying using sls deploy, it still reads the default .env file.
the documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So, I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?

The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it with your deployment command.
This will use the .env.dev file:
useDotenv: true

provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage property via the deploy command.
This will use the .env.prod file:
sls deploy --stage prod
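To make the link explicit: once the matching .env.{stage} file has been loaded, its variables are available through the env variable source. A minimal sketch (DB_HOST is a hypothetical variable that would live in your .env.dev / .env.prod):
useDotenv: true

provider:
  name: aws
  stage: ${opt:stage, 'dev'} # sls deploy --stage prod loads .env.prod, otherwise .env.dev
  environment:
    DB_HOST: ${env:DB_HOST} # value comes from whichever .env file was loaded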

In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod

As of Feb 2023, I'm going to attempt to give my solution. I'm using the Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
I see the purpose of this as enhancing the developer experience, in the sense that it is nice to just run nx run users:serve --stage=test (in my case using Nx) or sls offline --stage=test and have Serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried to go this route but, because I'm not that good of a developer, I couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file instead, but changing variable names based on the --stage. The magic happens in serverless.ts.
# .env

# DEVELOPMENT
STAGE_development=test
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
# READER/WRITER could be AWS RDS URIs per DB instance
READER_development=localhost
WRITER_development=localhost

# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
# READER/WRITER could be AWS RDS URIs per DB instance
READER_test=localhost
WRITER_test=localhost
// serverless.base.ts or serverless.ts, based on your configuration
...
useDotenv: true, // this property is at the root level
...
provider: {
  ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  ...,
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  },
  ...
}
When you set useDotenv: true, Serverless loads your variables from the .env file and exposes them through the env variable source, so you can access them as ${env:STAGE}.
Now I can access each variable with a dynamic stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable ends with the ..._<stage> suffix, so I can retrieve each value dynamically.
I'm still figuring out one case: I don't want the word production to appear in my variable names or URLs, but I still want to get the values dynamically; and since I'm concatenating ${env:DB_PORT_${self:provider.stage}}, dropping the suffix would make the lookup resolve to DB_PORT_ instead of DB_PORT.
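One possible workaround (just a sketch, assuming your Serverless Framework version supports fallback values in variable resolution) is to chain an un-suffixed variable as a fallback, so production could define a plain DB_PORT while the other stages keep the _<stage> suffix. In serverless.yml notation (the same string works in serverless.ts):
provider:
  environment:
    DB_PORT: ${env:DB_PORT_${self:provider.stage}, env:DB_PORT} # falls back to DB_PORT if no suffixed variable exists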

Related

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory, within a Cloud Build step. I have it working using a build similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']

availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']

availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, since it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager from within CloudBuild without using a Trigger Param?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 22). Thanks to Seza443's comment, I tested again and it now works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), and also with user-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
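For illustration, this is the availableSecrets block from the question rewritten with the default $PROJECT_NUMBER substitution (TEST-SECRET is the example secret name used above):
availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'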

AWS Lambda function got deleted. Not able to retrieve it

I am using the Serverless Framework with C# to execute queries in Athena. The AWS Lambda function got deleted automatically. When I try to deploy it, it doesn't work.
sls deploy --stage dev # to deploy the function
sls remove --stage dev # to remove the function
When I tried to redeploy it, it gave an error like the one below:
[screenshot: deployment error output]
As mentioned in that screenshot, for more error output I browsed the link it points to, which shows the stack detail. I have attached it below:
[screenshot: CloudFormation stack detail]
serverless.yml
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
# docs.serverless.com
#
# Happy Coding!

service: management-athena

custom:
  defaultStage: dev
  currentStage: ${opt:stage, self:custom.defaultStage} # 'dev' is default unless overridden by --stage flag

provider:
  name: aws
  runtime: dotnetcore2.1
  stage: ${self:custom.currentStage}
  role: arn:aws:iam::***********:role/service-role/nexus_labmda_schema_partition # must validly reference a role defined in your account
  timeout: 300
  environment: # service-wide environment variables
    DATABASE_NAME: ${file(./config/config.${self:custom.currentStage}.json):DATABASE_NAME}
    TABLE_NAME: ${file(./config/config.${self:custom.currentStage}.json):TABLE_NAME}
    S3_PATH: ${file(./config/config.${self:custom.currentStage}.json):S3_PATH}
    MAX_SITE_TO_BE_PROCESSED: ${file(./config/config.${self:custom.currentStage}.json):MAX_SITE_TO_BE_PROCESSED}

package:
  artifact: bin/release/netcoreapp2.1/deploy-package.zip

functions:
  delete_partition:
    handler: CsharpHandlers::AwsAthena.AthenaHandler::DeletePartition
    description: Lambda function which runs at a specified interval to delete Athena partitions
    # The `events` block defines how to trigger the AthenaHandler.DeletePartition code
    events:
      - schedule:
          rate: cron(0 8 * * ? *) # triggered every day at 3:00 AM EST; the provided time is in UTC, so 3 A.M. EST is 8 A.M. UTC
          enabled: true
I found the solution!
Sometimes we won't be able to deploy Lambda functions for many reasons. As #ASR mentioned in the comments, there might be Serverless Framework version issues, but in my case that didn't solve it. Just try deleting your function's log group from CloudWatch.
Go to AWS -> expand Services -> select CloudWatch -> select Logs -> search for your log group, select it and delete it. If your function name is my_function, then your log group name will be something like this: /aws/lambda/my_function
Then just deploy your Lambda function again.
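If you prefer the command line, the same cleanup can be done with the AWS CLI (assuming the default /aws/lambda/<function_name> log group naming):
aws logs delete-log-group --log-group-name /aws/lambda/my_function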
I am posting this thinking that it might help someone...! Please correct me if I am wrong.

How to share environment variables across AWS CodeDeploy steps?

I am working on a new deployment strategy that leverages AWS CodeDeploy. The project I work on has many environments (e.g: preproduction, production) and instances (e.g: EMEA, US, APAC).
I have the basic scaffolding working OK, but I noticed that environment variables set in the BeforeInstall hook cannot be retrieved from other steps (for instance, AfterInstall).
Is there a way to share environment variables across AWS CodeDeploy steps?
Content of appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /tmp/code-deploy
hooks:
  BeforeInstall:
    - location: utils/delivery/aws/CodeDeploy/before_install.sh
      timeout: 300
  AfterInstall:
    - location: utils/delivery/aws/CodeDeploy/after_install.sh
      timeout: 300
  ApplicationStart:
    - location: utils/delivery/aws/CodeDeploy/application_start.sh
      timeout: 300
  ValidateService:
    - location: utils/delivery/aws/CodeDeploy/validate_service.sh
      timeout: 300
I set an environment variable in before_install.sh:
export ENVIRONMENT=preprod
And if I reference it in after_install.sh:
$ echo $ENVIRONMENT
$
Nothing.
Thank you for your help on this one!
You could put the export into a temporary file and then source that file. So within before_install.sh:
ENVIRONMENT="preprod"
echo "export ENVIRONMENT=\"$ENVIRONMENT\"" > "/path/to/file"
Note: With this method, you are no longer exporting the variable in before_install.sh. You are simply writing a file to be sourced in after_install.sh:
source "/path/to/file"
echo "$ENVIRONMENT"
You should consider setting those variables up in the user data phase of the instance launch, instead of at deploy time. This way they are available to all CodeDeploy scripts for the life of the instance.
The type of data you describe (e.g. environment) is more associated with the instance itself, and would not normally change during a code deployment.
In your user data you would set an instance-level variable like this:
echo "ENVIRONMENT=preprod" >> /etc/environment
Another advantage of this approach is that your app itself may want to consult these variables when it launches, to provide environment-specific configuration.
If you use CloudFormation, you can set the environment up as a parameter and pass it on to the user data script. This way, you can launch the stack and its resources with the appropriate parameters, and launch consistent instances for any environment.
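As a rough sketch of that last point (the resource names, AMI ID and instance type below are placeholders, not taken from the original setup), the parameter can be wired into the instance user data like this:
Parameters:
  Environment:
    Type: String
    Default: preprod

Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678 # placeholder AMI
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # make the environment name visible to CodeDeploy hooks and the app
          echo "ENVIRONMENT=${Environment}" >> /etc/environment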

How can/do I configure my Spring app to check a specific tag of configs with custom directory?

Suppose I have the following dir:
root.properties
dev-ci
common.properties
app_dir
app.properties
prod
stage
test
test/stage/prod all follow dev-ci's dir/path structure.
How can I set up my bootstrap.yml file so that when I start my app, it loads the following:
root.properties
dev-ci/common.properties
dev-ci/app_dir/app.properties
Is there a way to set up my yml so that it takes some parameter from the command line? Or will I have to map out all the possible 'paths' and then pass in some label/name?
Lastly, where does the tag name come into play?

Concat variable names in GitLab

We use a GitLab project as a team. Each developer has their own Kubernetes cluster in the cloud and their own branch within GitLab. We use GitLab CI to automatically build new containers and deploy them to our Kubernetes clusters.
At the moment we have a .gitlab-ci.yml that looks something like this:
variables:
  USERNAME: USERNAME
  CI_K8S_PROJECT: ${USERNAME_CI_K8S_PROJECT}
  REGISTRY_JSON_KEY_FILE: ${USERNAME_REGISTRY_JSON_KEY_FILE}
  [...]

stages:
  - build
  - deploy
  - remove

build-zeppelin:
  stage: build
  image: docker:latest
  variables:
    image_name: "zeppelin"
  only:
    - ${USERNAME}#Gitlab-Repo
  tags:
    - cloudrunner
  script:
    - docker login -u _json_key -p "${REGISTRY_JSON_KEY_FILE?}" https://eu.gcr.io
    - image_name_fqdn="eu.gcr.io/${CI_K8S_PROJECT?}/${image_name?}:latest"
    - docker build -t ${image_name_fqdn?} .
    - docker push ${image_name_fqdn?}
    - echo "Your new image is '${image_name_fqdn?}'. Have fun!"
[...]
So in the beginning we reference the important information using a USERNAME prefix. This works quite well, but is problematic, since we need to correct it after every pull request from another user.
So we are searching for a way to keep the gitlab-ci file the same for every developer while still referencing GitLab variables that differ per developer.
Things we thought about that don't seem to work:
Use multiple yml files and import them into each other => not supported.
Try to combine GitLab environment variables as a prefix:
CI_K8S_PROJECT: ${${GITLAB_USER_ID}_CI_K8S_PROJECT}
or
INDIVIDUAL_CI_K8S_PROJECT: ${GITLAB_USER_ID}_CI_K8S_PROJECT
CI_K8S_PROJECT: ${INDIVIDUAL_CI_K8S_PROJECT}
We found a solution using indirect expansion (bash feature):
before_script:
  - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
  - export wantedValue=${!variableName}
But we also realised that our setup was somewhat flawed: it does not make sense to have a branch per user and use prefixed variables, since this leads to problems such as the above and to security concerns, since all variables are accessible to all users.
It is much easier if each user forks the root project and simply creates a merge request for new features. This way no renaming/prefixing of variables or branches is necessary at all.
The solution from #nik works only in bash. For sh, this works:
before_script:
  - variableName=...
  - export wantedValue=$( eval echo \$$variableName )
Something like this works (on 15.0.5-ee):
variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"
