How to have optional plugins in serverless framework yml file - aws-lambda

So I use the Serverless Framework for AWS Lambdas with a variety of plugins, as follows:
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-stack-output
  - plugin4
  - plugin5
  - plugin6
  - plugin7
  - plugin8
  - plugin9
  - plugin10
I also have multiple 'environments' (for different AWS accounts, configs, etc.), so I make the serverless.yml content vary per environment using imported sub-YAML files, as follows:
vpc: ${file(serverless/environment/${env:ENVIRONMENT}.yml):vpc}
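Each environment sub-file then defines the matching section; for example, a hypothetical serverless/environment/prod.yml might contain (values illustrative only):

vpc:
  securityGroupIds:
    - sg-0123456789abcdef0   # hypothetical security group
  subnetIds:
    - subnet-0123456789abcdef0   # hypothetical subnet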
But what I need is to make the presence of a single plugin conditional on the ENVIRONMENT variable. Let's say plugin8 should not be included if ENVIRONMENT=XXX.
With my previous strategy, I could externalise the whole plugin list to the individual environment sub-YAML files (see the sketch below), but that would lead to a fair amount of duplication.
Is there a better approach to making just one line in a YAML list conditional on an environment variable?
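For reference, that externalised-list workaround would look something like this, with each environment file repeating the nearly identical list:

plugins: ${file(serverless/environment/${env:ENVIRONMENT}.yml):plugins}

# serverless/environment/XXX.yml -- the whole list duplicated, minus plugin8
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-stack-output
  - plugin4
  # ... plugin5 through plugin7, plugin9 and plugin10, but no plugin8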
Thanks

Related

Serverless stage environment variables using dotenv (.env)

I'm new to serverless.
So far I was able to deploy and use .env for the app.
Then, in the stage property under provider in the serverless.yml file, I changed it to a different stage. I also made a new .env.{stage}.
After re-deploying using sls deploy, it still reads the default .env file.
the documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So, I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?
The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it within your deployment command.
This will use the .env.dev file:

useDotenv: true
provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage property via the deploy command. This will use the .env.prod file:
sls deploy --stage prod
In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod
As of Feb 2023, I'm going to attempt to give my solution. I'm using the Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
I see the purpose of this as enhancing the developer experience, in the sense that it is nice to just run nx run users:serve --stage=test (in my case using Nx) or sls offline --stage=test and have Serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried that route but couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file instead, but changing variable names based on the --stage. The magic happens in serverless.ts:
# .env
# DEVELOPMENT
STAGE_development=test
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
READER_development=localhost # this could be an AWS RDS URI per DB instance
WRITER_development=localhost # this could be an AWS RDS URI per DB instance

# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
READER_test=localhost # this could be an AWS RDS URI per DB instance
WRITER_test=localhost # this could be an AWS RDS URI per DB instance
// serverless.base.ts or serverless.ts, based on your configuration
...
useDotenv: true, // this property is at the root level
...
provider: {
  ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  ...,
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  },
  ...
}
When you use useDotenv: true, Serverless loads your variables from the .env file and puts them in the env variable source, so you can access them as ${env:STAGE}.
Now I can access a variable with a dynamic stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable has a ..._<stage> suffix at the end. In this way I can retrieve each value dynamically.
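For example, running the offline server against the test stage (a sketch based on the .env above):

sls offline --stage=test
# ${env:DB_PORT_${self:provider.stage}} resolves to ${env:DB_PORT_test}, i.e. 5433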
I'm still figuring out how to avoid having the word production in my URL while still getting the values dynamically: since I'm concatenating the stage into ${env:DB_PORT_${self:provider.stage}}, the actual variable looked up becomes DB_PORT_ instead of DB_PORT.

Avoid path redundancy in Gitlab CI include

To improve the structure of my GitLab CI file I include some specific files, for example:
include:
  - '/ci/config/linux.yml'
  - '/ci/config/windows.yml'
  # ... more includes
To avoid the error-prone redundancy of the path, I thought of putting it into a variable, like:
variables:
  CI_CONFIG_DIR: '/ci/config'

include:
  - '${CI_CONFIG_DIR}/linux.yml' # ERROR: "Local file `${CI_CONFIG_DIR}/linux.yml` does not exist!"
  - '${CI_CONFIG_DIR}/windows.yml'
  # ... more includes
But this does not work. GitLab CI claims that ${CI_CONFIG_DIR}/linux.yml does not exist, although the documentation says that variables in include paths are allowed; see https://docs.gitlab.com/ee/ci/variables/where_variables_can_be_used.html#gitlab-ciyml-file.
What also didn't work was to include a file /ci/config/main.yml and from that include the specific configurations without paths:
# /ci/config/main.yml
include:
  - 'linux.yml' # ERROR: "Local file `linux.yml` does not exist!"
  - 'windows.yml'
  # ... more includes
How can I make this work, or is there an alternative to define the path in only one place without making it too complicated?
This does not seem to be implemented at the moment; there is an open issue in the backlog.
Also, while the documentation says you can use variables within include sections, that only applies to predefined variables.
See if GitLab 14.2 (August 2021) can help:
Use CI/CD variables in include statements in .gitlab-ci.yml
You can now use variables as part of include statements in .gitlab-ci.yml files.
These variables can be instance, group, or project CI/CD variables.
This improvement provides you with more flexibility to define pipelines.
You can copy the same .gitlab-ci.yml file to multiple projects and use variables to alter its behavior.
This allows for less duplication in the .gitlab-ci.yml file and reduces the need for complicated per-project configuration.
See Documentation and Issue.
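Applied to the question above, this allows something like the following sketch, assuming CI_CONFIG_DIR is defined as a project (or group/instance) CI/CD variable in the settings rather than in the variables: block of the file:

include:
  - '${CI_CONFIG_DIR}/linux.yml'
  - '${CI_CONFIG_DIR}/windows.yml'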

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory, within a Cloud Build step. I have it working using a build config similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems. I then try to improve it slightly to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, being a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID for a Secret Manager secret from within Cloud Build without using a trigger param?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 2022): Thanks to Seza443's comment, I tested again, and now it works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), but also with customer-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration-file schema: the documentation refers to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
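Applied to the configuration from the question, that would look something like this (a sketch reusing the question's secret name):

availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'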

Google Deployment Manager: can you import files in a Jinja template that you call directly with --template?

https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? When using a --config file you can define imports in that file and call the template as a resource, but you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file; it was failing and I thought it wasn't supported. So the solution was to just swap out the config for a Jinja template (with a schema file), and then I can use --properties.
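In other words, something like this sketch, where helper.jinja is a hypothetical dependent file and the schema file follows the <template>.schema naming convention:

# vm_template.jinja.schema
imports:
- path: helper.jinja # hypothetical dependent file

properties:
  zone:
    type: string
    default: us-central1-a

gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja --properties zone:us-east1-b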
Maybe you can try importing the dependent files into your config file as follows:
imports:
- path: vm-template.jinja
- path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
- name: vm-1
  type: vm-template.jinja
- name: vm-2
  type: vm-template-2.jinja
and use Set Arbitrary Metadata in the schema to create a special variable that you can pass and use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves

variable-x: ultra-secret-sauce
More info about the optional flags for gcloud deployment-manager deployments create, with an example, can be found here.
More info about passing properties using a schema can be found here.
Hope it helps

How do I run Molecule testing for a specific or custom platform?

I know that I can add multiple platforms inside molecule.yml, but sometimes I may want to run a test on a new/experimental one, or I may have a platform which is not freely available, so I cannot commit changes to the molecule.yml file.
Molecule supports expansion of environment variables inside the molecule.yml file, which means you can use them to override the default platforms.
Here is sample content you could use to enable such overrides:
platforms:
  - name: ${MOLECULE_PLATFORM:-centos7}
    hostname: ${MOLECULE_PLATFORM:-centos7}
    image: ${MOLECULE_IMAGE:-centos:7}
    registry:
      url: ${MOLECULE_CONTAINER_REGISTRY_URL:-docker.io}
When these variables are not defined, the defaults will be used.
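For example, to run the test sequence against a different platform without editing molecule.yml (the values here are only illustrative):

MOLECULE_PLATFORM=centos8 MOLECULE_IMAGE=centos:8 molecule test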
