Serverless Framework - environment variables from file and manual - aws-lambda

I have a Serverless Framework function for AWS Lambda. I keep secrets in the AWS Systems Manager (SSM) Parameter Store and other environment variables in local .yml files, one per deployment stage (dev, stg, prod).
How can I use environment variables from both the file and SSM?
Using only the SSM secrets works:
functions:
  kinesisEvents:
    handler: kinesis_events_processing.lambda_handler
    name: kinesis-events-${self:provider.stage}
    package: {}
    maximumRetryAttempts: 2
    events:
      - stream:
          type: kinesis
          ... # omitted a few things
    environment:
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}
And using only the file also works:
functions:
  kinesisEvents:
    ... # as above
    environment:
      ${file(${self:provider.stage}.yml):}
But how can I combine those, so that all of the variables end up as env vars in the final deployment? I tried this, but it does not work and throws an error during deploy:
functions:
  kinesisEvents:
    ... # as above
    environment:
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}
      ${file(${self:provider.stage}.yml):}

I found a few related answers. Basically, the Serverless Framework has no particular support for this feature. However, it supports extended YAML syntax, which has anchor and dictionary-merging capabilities.
So first I load the env vars from the YAML config file at the top of serverless.yml and anchor them with &env_vars (an anchor works like a variable for referencing, but within YAML):
env_vars: &env_vars
  ${file(${self:provider.stage}.yml):}

functions:
  ...
And then I use it, merging that dictionary into the function's environment:
    environment:
      <<: *env_vars
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}
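For completeness, a minimal sketch of how the pieces fit together; the stage file name (dev.yml) and its keys below are purely illustrative, not part of the original setup:

# dev.yml -- hypothetical per-stage file containing plain env vars
LOG_LEVEL: debug
QUEUE_NAME: events-dev

# serverless.yml
env_vars: &env_vars
  ${file(${self:provider.stage}.yml):}

functions:
  kinesisEvents:
    handler: kinesis_events_processing.lambda_handler
    environment:
      <<: *env_vars
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}

The merge key copies the file's variables into environment, and the explicitly listed SSM values sit alongside them; under standard YAML merge-key semantics, an explicitly listed key wins over a merged one if both define it.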

Related

How can I add an HTTP API stage in serverless?

I am trying to deploy a serverless application to different stages (prod and dev). I want to deploy it to a single API Gateway on two different stages, like:
http://vfdfdf.execute-api.us-west-1.amazonaws.com/dev/
http://vfdfdf.execute-api.us-west-1.amazonaws.com/prod/
I have written this in serverless:
provider:
  name: aws
  runtime: nodejs14.x
  region: ${self:custom.${self:custom.stage}.lambdaRegion}
  httpApi:
    id: ${self:custom.${self:custom.stage}.httpAPIID}
  stage: ${opt:stage, 'dev'}
Edited to reflect the comments.
That can be done during the serverless deployment phase.
I would just have dev as the default in the serverless.yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-1
  httpApi:
    # Attach to an externally created HTTP API via its ID:
    id: w6axy3bxdj
    # or commented on the very first deployment so serverless creates the HTTP API

custom:
  stage: ${opt:stage, self:provider.stage}

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /${self:custom.stage}/hello
          method: get
Then, the command:
serverless deploy
deploys to stage dev and, in this example, region eu-west-1, using the default values.
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/dev/hello
For production, on the other hand, the default values can be overridden on the command line. Then I would use the command:
serverless deploy --stage prod
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/prod/hello
In my understanding you do not change the region between dev and prod, but in case you want to, the production deployment could be:
serverless deploy --stage prod --region eu-west-2
to deploy in a different region than the default one from the serverless.yml file.
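If the region or the HTTP API ID really does differ per stage, the asker's original ${self:custom.${self:custom.stage}.lambdaRegion} pattern can still be used by adding a per-stage map under custom. A rough sketch with made-up IDs and regions:

custom:
  stage: ${opt:stage, 'dev'}
  dev:
    lambdaRegion: eu-west-1
    httpAPIID: w6axy3bxdj
  prod:
    lambdaRegion: eu-west-2
    httpAPIID: a1b2c3d4e5   # hypothetical ID of a separately created prod HTTP API

provider:
  name: aws
  runtime: nodejs14.x
  region: ${self:custom.${self:custom.stage}.lambdaRegion}
  httpApi:
    id: ${self:custom.${self:custom.stage}.httpAPIID}

Then serverless deploy --stage prod picks the prod block, while a plain serverless deploy keeps using the dev values.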

How to include secret.json files in a SAM template.yml

In my previous application I worked with the Serverless Framework, but now I want to use a SAM template.
When I used Serverless, I included secrets.json in one section and used it in multiple places like this: ${self:custom.secrets.AWS_ID}.
My sample code:
custom:
  secrets: ${file(secrets.json)}
  tableName: ${self:custom.secrets.DB_TABLE_NAME}

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    JWT_SECRET: ${self:custom.secrets.JWT_SECRET}
    AWS_ID: ${self:custom.secrets.AWS_ID}
    DB_TABLE_NAME: ${self:custom.secrets.DB_TABLE_NAME}
Now my question is: how can I include secret.json and use it in multiple places in the SAM template.yml file?
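SAM has no direct equivalent of ${file(...)} interpolation. One common approach, sketched here on the assumption that the values can be passed as CloudFormation parameters (the parameter names below are illustrative), is to declare them in template.yml and supply them at deploy time:

# template.yml (fragment)
Parameters:
  JwtSecret:
    Type: String
    NoEcho: true
  DbTableName:
    Type: String

Globals:
  Function:
    Environment:
      Variables:
        JWT_SECRET: !Ref JwtSecret
        DB_TABLE_NAME: !Ref DbTableName

The values can then be supplied with sam deploy --parameter-overrides JwtSecret=... DbTableName=..., or kept in a samconfig.toml, rather than in a committed secret.json.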

Understanding sourcing secrets in a Kubernetes Spring Boot app

I am following this guide to consume secrets: https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource.
It says, roughly:
1. Save secrets
2. Reference the secrets in the deployment.yml file:
containers:
  - env:
      - name: DB_USERNAME
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: username
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: password
Then it says "You can select the Secrets to consume in a number of ways:" and gives 3 examples. However, without doing any of these steps I can already see the secrets in my env perfectly. Furthermore, the operations in step 1 and step 2 happen independently of Spring Boot (saving the secrets and moving them into environment variables).
My questions:
If I make the changes suggested in step 3, what changes/improvements does that make for my container/app/pod?
Is there no way to avoid all the mapping in step 1 and just put all the secrets into the env?
They write -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets to source all secrets; how did they know the secrets were in a folder called /etc/secrets?
You can expose all of a secret's keys as env variables in the following way:
containers:
  - name: app
    envFrom:
      - secretRef:
          name: db-secret
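For reference, this assumes a Secret along these lines already exists (names and values here are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  username: myuser
  password: changeme

Note that with envFrom the variable names are the secret's keys themselves (username, password), whereas the explicit secretKeyRef mapping in your snippet renames them to DB_USERNAME and DB_PASSWORD.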
As for where Spring gets secrets from: I'm not an expert in Spring, but it seems there is already an explanation in the link you provided:
When enabled, the Fabric8SecretsPropertySource looks up Kubernetes for Secrets from the following sources:
- Reading recursively from secrets mounts
- Named after the application (as defined by spring.application.name)
- Matching some labels
So it takes secrets from the secrets mount (if you mount them as volumes). It also scans the Kubernetes API for secrets (I guess in the same namespace the app is running in). It can do this by utilizing the Kubernetes serviceaccount token, which by default is always mounted into the pod. Whether that succeeds depends on the Kubernetes RBAC permissions given to the pod's serviceaccount.
So it tries to search for secrets using the Kubernetes API and matches them against the application name or application labels.
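As for the /etc/secrets path specifically: that is simply where the guide's example mounts the Secret as a volume, so there is nothing magic about it. A minimal sketch of such a mount (container and volume names are placeholders):

containers:
  - name: app
    volumeMounts:
      - name: db-secret-volume
        mountPath: /etc/secrets
        readOnly: true
volumes:
  - name: db-secret-volume
    secret:
      secretName: db-secret

Each key of the Secret appears as a file under /etc/secrets, which is exactly what -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets then reads recursively.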

Is it possible to do parameter substitution in k8s configMaps?

I have a ConfigMap that loads properties files for my Spring Boot application.
The ConfigMap is mounted as a volume and my Spring Boot app reads from that volume.
My typical property files are:
application-dev1.yml has:
integrations-queue-name=integration-dev1
search-queue-name=searchindex-dev1
application-dev2.yml
integrations-queue-name=integration-dev2
search-queue-name=searchindex-dev2
application-dev3.yml
integrations-queue-name=integration-dev3
search-queue-name=searchindex-dev3
My goal is to have 1 properties file
application-env.yml
integrations-queue-name=integration-{env}
search-queue-name=searchindex-{env}
I want to substitute {env} with the profile that is active for my service.
Is it possible to do parameter substitution in ConfigMaps from my Spring Boot application running in the pod? I am looking for something similar to the maven-resources-plugin, but applied at run time.
If it's just those two, then likely you will get more mileage out of using the SPRING_APPLICATION_JSON environment variable, which should supersede anything in the configmap:
containers:
  - name: my-spring-app
    image: whatever
    env:
      - name: ENV_NAME
        value: dev2
      - name: SPRING_APPLICATION_JSON
        value: |
          {"integrations-queue-name": "integration-$(ENV_NAME)",
           "search-queue-name": "searchindex-$(ENV_NAME)"}
materializes as:
$ kubectl exec my-spring-pod -- printenv
ENV_NAME=dev2
SPRING_APPLICATION_JSON={"integrations-queue-name": "integration-dev2",
"search-queue-name": "searchindex-dev2"}

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works; we can then put the serverless.yml file back and deploy successfully.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}

custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}

plugins:
  - serverless-offline
  - serverless-stage-manager
...
TLDR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about the default profile not existing. This is only an issue when the SSM Parameter Store is used in the serverless file.
The profile attribute in your serverless.yaml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of 2 possible solutions to this:
1. Try removing profile from your serverless.yaml completely and using environment variables only.
2. Leave profile: default in your serverless.yaml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
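For option #1 in a BitBucket pipeline, here is a sketch assuming the repository variables are named AWS_ACCESS_KEY and AWS_ACCESS_SECRET as in the question: export them under the standard AWS SDK names and skip serverless config credentials entirely (the branch name and deploy command are illustrative):

# bitbucket-pipelines.yml (fragment)
pipelines:
  branches:
    main:
      - step:
          script:
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
            - export AWS_SECRET_ACCESS_KEY=$AWS_ACCESS_SECRET
            - npx serverless deploy --stage staging

With profile: default removed from serverless.yml, both the framework and its SSM lookups will pick up these environment variables directly.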
