How to get rid of serverless "Warning: Invalid configuration encountered at root: unrecognized property 'deploymentBucket'" - aws-lambda

I've got a web application running on the serverless framework version 3.7.5. Every time I deploy my lambda function I get this warning:
"Warning: Invalid configuration encountered at root: unrecognised property 'deploymentBucket'".
I have attached the "serverless.yml" file below for external scrutiny. Is my configuration of the "deploymentBucket" property not valid? Do I need to change or edit any of the properties?
Note: Deployment works fine since it's simply a warning, and I am able to proceed to testing my API endpoints... I just find this warning a tad bothersome and would like to erase it once and for all. Thanks in advance!
Here's my serverless.yml file
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: poppy-seed
# app and org for use with dashboard.serverless.com
#app: your-app-name
#org: your-org-name

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
frameworkVersion: '3.7.5'

provider:
  name: aws
  runtime: java11
  timeout: 30
  lambdaHashingVersion: 20201221

# you can overwrite defaults here
#  stage: dev
#  region: us-east-1
variable1: value1

# you can add packaging information here
package:
  artifact: build/libs/poppy-seed-dev-all.jar

functions:
  poppy-seed:
    handler: com.serverless.lambda.Handler
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: "{proxy+}"
          method: ANY
          cors: true

deploymentBucket:
  blockPublicAccess: true # Prevents public access via ACLs or bucket policies. Default is false
  skipPolicySetup: false # Prevents creation of default bucket policy when framework creates the deployment bucket. Default is false
  name: # Deployment bucket name. Default is generated by the framework
  maxPreviousDeploymentArtifacts: 5 # On every deployment the framework prunes the bucket to remove artifacts older than this limit. The default is 5
  versioning: false # enable bucket versioning. Default is false
deploymentPrefix: serverless # The S3 prefix under which deployed artifacts should be stored. Default is serverless
disableDefaultOutputExportNames: false # optional, if set to 'true', disables default behavior of generating export names for CloudFormation outputs
lambdaHashingVersion: 20201221 # optional, version of hashing algorithm that should be used by the framework

plugins:
  - serverless-sam

# Resources:
#   NewResource:
#     Type: AWS::S3::Bucket
#     Properties:
#       BucketName: my-new-bucket
# Outputs:
#   NewOutput:
#     Description: "Description for the output"
#     Value: "Some output value"

The warning means that the deploymentBucket property is not recognized at the root level, so it is not doing what you think it should be doing.
According to the Serverless docs, deploymentBucket should be a property under provider, not a root property.

I was able to get rid of this warning by moving the deploymentBucket property under provider instead of registering it as a root property. The modified serverless.yml file is attached below:
service: poppy-seed

provider:
  name: aws
  runtime: java11
  timeout: 30
  lambdaHashingVersion: 20201221
  deploymentBucket:
    blockPublicAccess: true
    skipPolicySetup: false
    name: poppy-seed
    maxPreviousDeploymentArtifacts: 5
    versioning: false # enable bucket versioning. Default is false

package:
  artifact: build/libs/poppy-seed-dev-all.jar

functions:
  poppy-seed:
    handler: com.serverless.lambda.Handler
    events:
      - http:
          path: "{proxy+}"
          method: ANY
          cors: true

plugins:
  - serverless-sam
Also read the Serverless documentation for more clarity. Thanks again to @NoelLlevares for the tip.

Also try updating to the latest version of Serverless; in my case, some keys were unrecognized in the old version.
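If you do upgrade the CLI, note that a strict frameworkVersion pin like '3.7.5' will then reject deploys with the newer release. A minimal sketch of a relaxed pin (the range used here is an example, not from the original config):

# accept any Serverless Framework 3.x release after upgrading the CLI
frameworkVersion: '^3.7.5'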

Related

Configure TLS for Sonarqube via Helm Chart

I'm deploying Sonarqube via the official Helm charts and using the following ingress configuration:
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: sonar.<company>.com
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
      # the pathType can be one of the following values: Exact|Prefix|ImplementationSpecific(default)
      # pathType: ImplementationSpecific
  annotations:
  # kubernetes.io/tls-acme: "true"
  # nginx.ingress.kubernetes.io/proxy-body-size: "64m"
  # Set the ingressClassName on the ingress record
  # ingressClassName: nginx
  # Additional labels for Ingress manifest file
  # labels:
  #   traffic-type: external
  #   traffic-type: internal
  tls:
    # Secrets must be manually created in the namespace. To generate a self-signed certificate (and private key) and then create the secret in the cluster please refer to official documentation available at https://kubernetes.github.io/ingress-nginx/user-guide/tls/#tls-secrets
    - secretName: sonar-server-tls
      hosts:
        - sonar.<company>.com
Sonar works when using http://sonar.<company>.com:443, but without the certificate; https://sonar.<company>.com doesn't work. I cannot find much related to this specific topic. Some questions:
Do I have to use nginx here? If yes, is it recommended to use nginx.enabled: true to make things work smoothly? The secret name is valid, exists, and is found during deployment.
Thanks for any advice.
Using HTTP instead of HTTPS is not recommended, as it will not provide the same level of security. It is possible to use Nginx to enable HTTPS: you will likely need nginx to act as a reverse proxy for the sonar.<company>.com domain, and then configure it to use the secret containing the certificate. It is generally recommended to set nginx.enabled: true to ensure that the setup works properly, which will then allow you to set up the nginx configuration and use the secret name provided. Once this is done, you should be able to access Sonar securely on the HTTPS address you specified.
For more information follow this doc.
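For illustration, a minimal sketch of the relevant chart values, assuming the ingress-nginx controller is installed in the cluster and the sonar-server-tls secret already exists in the namespace (the force-ssl-redirect annotation is optional and only an example):

ingress:
  enabled: true
  # route the Ingress through the nginx controller
  ingressClassName: nginx
  hosts:
    - name: sonar.<company>.com
      path: /
  annotations:
    # optionally redirect plain HTTP to HTTPS at the controller
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  tls:
    - secretName: sonar-server-tls
      hosts:
        - sonar.<company>.com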

Serverless Framework - environment variables from file and manual

I have a Serverless Framework function for AWS Lambda. I have secrets in the AWS Systems Manager (SSM) Parameter Store and other environment variables in local .yml files, for separate deployments (dev, stg, prod).
How can I use environment variables from both the file and SSM?
Only the secrets work:
functions:
  kinesisEvents:
    handler: kinesis_events_processing.lambda_handler
    name: kinesis-events-${self:provider.stage}
    package: {}
    maximumRetryAttempts: 2
    events:
      - stream:
          type: kinesis
          ... # omitted a few things
    environment:
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}
And using only the file also works:
functions:
  kinesisEvents:
    ... # as above
    environment:
      ${file(${self:provider.stage}.yml):}
But how can I combine those, so I have all those variables set as env vars in the final deployment? I tried this, but it does not work and throws an error during deploy:
functions:
  kinesisEvents:
    ... # as above
    environment:
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}
      ${file(${self:provider.stage}.yml):}
I found answers here, here and here. Basically, the Serverless Framework has no particular support for this feature. However, it supports extended YAML syntax, which has anchor and dictionary merging capabilities.
So first I unpack the env vars from the YAML config file, at the top of the file, and anchor them with &env_vars (like a variable for referencing, but in YAML):
env_vars: &env_vars
  ${file(${self:provider.stage}.yml):}

functions:
  ...
And then I use it, unpacking this dictionary:
environment:
  <<: *env_vars
  DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
  API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}
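Putting the two pieces together, a sketch of the relevant part of serverless.yml, using the same stage-named file and SSM paths as above:

# top-level anchor holding the variables loaded from dev.yml / stg.yml / prod.yml
env_vars: &env_vars
  ${file(${self:provider.stage}.yml):}

functions:
  kinesisEvents:
    handler: kinesis_events_processing.lambda_handler
    environment:
      # merge the file-based variables, then add the SSM-backed secrets
      <<: *env_vars
      DB_PASSWORD: ${ssm:/${self:provider.stage}/db/db_password}
      API_KEY: ${ssm:/${self:provider.stage}/api/internal_api_key}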

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our Lambda using Serverless on Bitbucket Pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in Docker containers and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}

custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}

plugins:
  - serverless-offline
  - serverless-stage-manager
...
TLDR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about the default profile not existing. This is only an issue when using the SSM Parameter Store in the serverless file.
The profile attribute in your serverless.yml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of 2 possible solutions to this:
1. Try removing profile from your serverless.yml completely and using environment variables only (see the sketch after this answer).
2. Leave profile: default in your serverless.yml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
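For option 1, a minimal sketch of the provider block with profile removed, assuming the pipeline exports AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY so the AWS SDK can pick the credentials up directly:

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  # no "profile" entry here; credentials come from the environment variables set in the pipeline
  stage: ${opt:stage, 'dev'}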

aws serverless - exporting output value for cognito authorizer

I'm trying to share a Cognito authorizer between my stacks. For this I'm exporting my authorizer, but when I try to reference it in another service I get the error:
Trying to request a non exported variable from CloudFormation. Stack name: "myApp-services-test" Requested variable: "ExtApiGatewayAuthorizer-test".
Here is my stack where I have the authorizer defined and exported:
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    # Generate a name based on the stage
    UserPoolName: ${self:provider.stage}-user-pool
    # Set email as an alias
    UsernameAttributes:
      - email
    AutoVerifiedAttributes:
      - email

ApiGatewayAuthorizer:
  Type: AWS::ApiGateway::Authorizer
  Properties:
    Name: CognitoAuthorizer
    Type: COGNITO_USER_POOLS
    IdentitySource: method.request.header.Authorization
    RestApiId: { "Ref": "ProxyApi" }
    ProviderARNs:
      - Fn::GetAtt:
          - CognitoUserPool
          - Arn

ApiGatewayAuthorizerId:
  Value:
    Ref: ApiGatewayAuthorizer
  Export:
    Name: ExtApiGatewayAuthorizer-${self:provider.stage}
This is successfully exported, as I can see it in the stack exports list in my AWS console.
I try to reference it in another stack like this:
myFunction:
  handler: handler.myFunction
  events:
    - http:
        path: /{userID}
        method: put
        cors: true
        authorizer:
          type: COGNITO_USER_POOLS
          authorizerId: ${myApp-services-${self:provider.stage}.ExtApiGatewayAuthorizer-${self:provider.stage}}
My environment info:
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 12.13.1
Framework Version: 1.60.5
Plugin Version: 3.2.7
SDK Version: 2.2.1
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
Answering my own question:
It looks like I should have imported by output name, not the output export name, which is a bit weird since all the docs I have seen point to the export name, but this is how I was able to make it work.
I replaced this:
authorizerId: ${myApp-services-${self:provider.stage}.ExtApiGatewayAuthorizer-${self:provider.stage}}
with this:
authorizerId: ${myApp-services-${self:provider.stage}.ApiGatewayAuthorizerId}
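To make the distinction concrete, here is the same Outputs entry from the exporting stack, annotated (comments added for illustration) with which name is which:

ApiGatewayAuthorizerId:                                    # output (logical) name - this is what the working reference uses
  Value:
    Ref: ApiGatewayAuthorizer
  Export:
    Name: ExtApiGatewayAuthorizer-${self:provider.stage}   # export name - not what is referenced above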
If you come across Trying to request a non exported variable from CloudFormation. Stack name: "myApp-services-test" Requested variable: "ExtApiGatewayAuthorizer-test". when exporting a profile, i.e.,
export AWS_PROFILE=your_profile
it must be done in the terminal window where you are running sls deploy, not in another terminal window. It is a silly mistake, but I don't want anyone else to waste their time on it.

Pointing Two AWS Lambda Functions to Same Domain

I am using the Serverless Framework and AWS Lambda to deploy two functions with different path names (/message and /subscribe) to my subdomain at form.example.com.
I am using the serverless-domain-manager plugin for serverless and successfully configured my domain for the /message function using serverless create_domain, but since I also needed to do that for /subscribe, I tried to follow the same process, received messages that the domain already existed, and caught an error: Error: Unable to create basepath mapping.
After flipping a configuration flag (createRoute53Record: false) and re-running, it started to work, but now when I run sls deploy for my /message function I get the error message I used to see for /subscribe.
Error (from sls deploy):
layers:
None
Error --------------------------------------------------
Error: Unable to create basepath mapping.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Here is my config for the serverless-domain-manager:
plugins:
  - serverless-offline
  - serverless-domain-manager

custom:
  transactionDomain:
    dev: ${file(./local-keys.yml):transactionDomain}
    prod: ${ssm:mg-production-transaction-domain~true}
  newsletterDomain:
    dev: ${file(./local-keys.yml):newsletterDomain}
    prod: ${ssm:mg-production-newsletter-domain~true}
  apiKey:
    dev: ${file(./local-keys.yml):apiKey}
    prod: ${ssm:mg-production-api-key~true}
  customDomain:
    domainName: form.example.com
    certificateName: 'www.example.com' # sub-domain is included in the certificate
    stage: 'prod'
    createRoute53Record: true
Does this have to do with the deployment of two functions to the same domain? Is there a proper process to allow that to happen?
If you do not need API Gateway specific features, such as usage plans, you can put the two Lambdas behind an ALB with per-path routing, as sketched below.
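A minimal sketch of what that could look like using Serverless alb events, assuming an existing Application Load Balancer listener (the listener ARN is a placeholder and the handler names are hypothetical):

functions:
  message:
    handler: handler.message    # hypothetical handler
    events:
      - alb:
          listenerArn: arn:aws:elasticloadbalancing:us-east-1:000000000000:listener/app/my-alb/0000000000000000/0000000000000000    # placeholder ARN
          priority: 1
          conditions:
            path: /message
  subscribe:
    handler: handler.subscribe    # hypothetical handler
    events:
      - alb:
          listenerArn: arn:aws:elasticloadbalancing:us-east-1:000000000000:listener/app/my-alb/0000000000000000/0000000000000000    # same listener, different rule
          priority: 2
          conditions:
            path: /subscribe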
