How can I add an HTTP API stage in serverless - aws-lambda

I am trying to deploy a serverless application to different stages (prod and dev). I want to deploy it to a single API gateway on two different stages
like:
http://vfdfdf.execute-api.us-west-1.amazonaws.com/dev/
http://vfdfdf.execute-api.us-west-1.amazonaws.com/prod/
I have written this in my serverless config:
provider:
  name: aws
  runtime: nodejs14.x
  region: ${self:custom.${self:custom.stage}.lambdaRegion}
  httpApi:
    id: ${self:custom.${self:custom.stage}.httpAPIID}
    stage: ${opt:stage, 'dev'}
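For the nested variables above to resolve, the custom section (not shown in the question) would need per-stage entries along these lines; a hypothetical sketch reusing the region and API ID from the question's URLs:

custom:
  stage: ${opt:stage, 'dev'}
  dev:
    lambdaRegion: us-west-1
    httpAPIID: vfdfdf
  prod:
    lambdaRegion: us-west-1
    httpAPIID: vfdfdf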

Edited to reflect the comments
That can be done during the serverless deployment phase.
I would just have dev as the default in the serverless.yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-1
  httpApi:
    # Attach to an externally created HTTP API via its ID:
    id: w6axy3bxdj
    # or leave it commented out on the very first deployment so serverless creates the HTTP API
custom:
  stage: ${opt:stage, self:provider.stage}
functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /${self:custom.stage}/hello
          method: get
Then, the command:
serverless deploy
deploys to the dev stage in the eu-west-1 region, using the default values from serverless.yml.
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/dev/hello
For production, the default values can be overridden on the command line. Then I would use the command:
serverless deploy --stage prod
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/prod/hello
In my understanding, you would not change the region between dev and prod, but in case you want to, the production deployment could be:
serverless deploy --stage prod --region eu-west-2
to deploy in a different region than the default one from the serverless.yml file.
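As a quick smoke test after each deployment, both stage paths on the single HTTP API can be called directly, for example:

curl https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/dev/hello
curl https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/prod/hello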

Related

AWS::CloudFormation::Stack creation through serverless framework failed on localstack

I'm deploying a Lambda on localstack using the Serverless framework. I've configured AWS credentials in ~/.aws/credentials. When I run the deploy command, I get the following error and couldn't figure out the cause of the failure.
Command: serverless deploy --stage local --aws-profile default
Output:
✖ Stack lambda-api-local failed to deploy (12s)
Environment: darwin, node 16.14.0, framework 3.17.0 (local) 3.17.0v (global), plugin 6.2.2, SDK 4.3.2
Credentials: Local, "default" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
CREATE_FAILED: lambda-api-local (AWS::CloudFormation::Stack)
undefined
This is my ~/.aws/credentials
[default]
aws_access_key_id = test
aws_secret_access_key = test
This is my serverless.yml
service: lambda-api
plugins:
  - serverless-localstack
provider:
  name: aws
  stage: local
  runtime: go1.x
  profile: localstack
package:
  patterns:
    - '!./**'
    - './bin/**'
functions:
  hello:
    handler: bin/lambda-practice
    events:
      - http:
          path: /hello
          method: get
custom:
  localstack:
    debug: true
    endpointFile: localstack_endpoints.json
    stages:
      # Stages for which the plugin should be enabled
      - local
    host: http://localhost
    edgePort: 4567
    autostart: true
    lambda:
      mountCode: true
    docker:
      sudo: false
I'm trying to deploy and run a Lambda on localstack through the Serverless framework.
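The config above references an endpointFile (localstack_endpoints.json) that isn't shown. For reference, the serverless-localstack endpoint file typically maps AWS service names to the local edge endpoint, along these lines (a sketch assuming the edge port 4567 from the config; the exact set of service names depends on what the stack uses):

{
  "CloudFormation": "http://localhost:4567",
  "CloudWatch": "http://localhost:4567",
  "Lambda": "http://localhost:4567",
  "S3": "http://localhost:4567",
  "APIGateway": "http://localhost:4567"
}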

configure serverless.yml laravel bref

I tried to deploy a Laravel serverless app with Bref to AWS API Gateway and a Lambda function,
but I keep getting (strict-origin-when-cross-origin) with a blank screen and this output:
{"message":"Internal Server Error"}
and this is my serverless.yml file
service: laravelproject
provider:
  name: aws
  # The AWS region in which to deploy (us-east-1 is the default)
  region: eu-central-1
  # The stage of the application, e.g. dev, production, staging… ('dev' is the default)
  stage: dev
  runtime: provided.al2
package:
  # Directories to exclude from deployment
  patterns:
    - '!node_modules/'
    - '!public/storage'
    - '!resources/assets/'
    - '!storage/'
    - '!tests/'
functions:
  # This function runs the Laravel website/API
  web:
    handler: public/index.php
    timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)
    layers:
      - ${bref:layer.php-80-fpm}
    events:
      - httpApi: '*'
  # This function lets us run artisan commands in Lambda
  artisan:
    handler: artisan
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-80} # PHP
      - ${bref:layer.console} # The "console" layer
plugins:
  # We need to include the Bref plugin
  - ./vendor/bref/bref
Try increasing the PHP version you are specifying.
Please change ${bref:layer.php-80} to:
${bref:layer.php-81}
I got it working with this
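Applied to the serverless.yml above, the change is just the layer references; a sketch, on the assumption that both functions should move to the same PHP version (so the FPM layer of the web function is bumped too):

functions:
  web:
    handler: public/index.php
    timeout: 28
    layers:
      - ${bref:layer.php-81-fpm}   # bumped from php-80-fpm (assumption: keep both functions on the same PHP version)
    events:
      - httpApi: '*'
  artisan:
    handler: artisan
    timeout: 120
    layers:
      - ${bref:layer.php-81}       # bumped from php-80 as suggested above
      - ${bref:layer.console}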

Serverless framework - Deploying a function with a different runtime

I'm using the Serverless framework to deploy multiple Lambda functions, all of which run on Node.js. Now I need to create a new Lambda function that runs on Java 11, and I want its configuration to be in the same YAML file along with my other Lambda functions. My JAR file is uploaded to an S3 bucket and I'm referencing that bucket in my Serverless config to fetch the package from there and deploy it to the function. However, it seems a wrong package is being deployed to my function, as the deployed file size is larger than the actual size of my JAR file. When I run my Lambda function it fails because it cannot find the handler, due to the incorrect package being deployed. I verified this by manually uploading the JAR file to my Java Lambda function, and it worked.
Below is the code snippet of my yaml file:
---
service: api
provider:
  name: aws
  stackName: ${self:custom.prefix}-${opt:stage}-${self:service.name}
  runtime: nodejs14.x
  stage: ${opt:custom_stage, opt:stage}
  tracing:
    lambda: Active
  timeout: 30
  logRetentionInDays: 180
  environment:
    STAGE: ${opt:stage, opt:stage}
    ENVIRONMENT: ${self:provider.stage}
    SUPPRESS_NO_CONFIG_WARNING: true
    ALLOW_CONFIG_MUTATIONS: true
functions:
  sample-function-1:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-sample-function-1
    name: ${self:custom.prefix}-${opt:stage}-sample-function-1
    handler: authorizers/handler.authHandler1
  sample-function-2:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-sample-function-2
    name: ${self:custom.prefix}-${opt:stage}-sample-function-2
    handler: authorizers/handler.authHandler1
  myJavaFunction:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-myJavaFunction-role
    name: ${self:custom.prefix}-${opt:stage}-myJavaFunction
    runtime: java11
    package:
      artifact: s3://myBucket/myJarFile.jar
    handler: com.myFunction.LambdaFunctionHandler
    memorySize: 512
    timeout: 900
How can I deploy the correct package to my lambda function by fetching the jar file from S3 bucket?
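A frequent cause of this symptom is that, without individual packaging, the shared service bundle (the zipped Node.js code) is what actually gets uploaded for every function, which matches the "file is larger than my JAR" observation. A minimal sketch of that fix, under the assumption that this is the cause (I'm also not certain every Framework version accepts an s3:// URL in artifact; downloading the JAR in the build step and pointing artifact at the local file is a fallback):

provider:
  name: aws
  runtime: nodejs14.x

package:
  individually: true   # package (and override) each function separately

functions:
  myJavaFunction:
    runtime: java11
    handler: com.myFunction.LambdaFunctionHandler
    package:
      # with individually: true, this artifact is used as-is for this function only
      artifact: s3://myBucket/myJarFile.jar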

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and then we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the "Profile default does not exist" error.
serverless.yml (partial)
service: our-service
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}
custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}
plugins:
  - serverless-offline
  - serverless-stage-manager
...
TLDR: serverless config credentials only works when serverless.yml isn't present, otherwise it complains about profile default not existing, only an issue when using SSM Param store in the serverless file.
The profile attribute in your serverless.yaml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of 2 possible solutions to this:
1. Try removing profile from your serverless.yaml completely and using environment variables only.
2. Leave profile: default in your serverless.yaml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
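For #1, a minimal sketch of what the pipeline step could look like, assuming the repository variables are named AWS_ACCESS_KEY and AWS_ACCESS_SECRET as in the question and that profile: default has been removed from serverless.yml. The Framework (via the AWS SDK) picks credentials up from the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, so no serverless config credentials call is needed:

# bitbucket-pipelines.yml step (sketch)
- export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
- export AWS_SECRET_ACCESS_KEY=$AWS_ACCESS_SECRET
- serverless deploy --stage staging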

Lambda@Edge CloudFront resource creation

I'm a little lost here. I'm trying to deploy a simple function that uses Lambda@Edge, but I'm having some problems creating the CloudFront resource and attaching it to the Lambda function.
Here is an example of the serverless.yml
service: some-service
plugins:
  - serverless-pseudo-parameters
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1
resources:
  - ${file(./resources.yml):resources}
functions:
  - ${file(./lambda-at-edge/function.yml):functions}
The function definition:
functions:
  lambda-at-edge-function:
    description: Lambda at edge authentication
    handler: serverless/index.handler
    events:
      - cloudFront:
          eventType: viewer-response
          origin: s3://some.s3.amazonaws.com/
One thing: if I don't define the CloudFront resource, it's not created; and if I do define the resource and attach it to the serverless definition, the resource is created, but then I don't know how to attach that CloudFront distribution to the function.
Edit:
I'm deploying everything with sls deploy, so my question now is: how can I attach the function name to be used in LambdaFunctionAssociations of the CloudFront distribution?
When using Lambda@Edge you have to respect the limits.
Check them out here:
Requirements and Restrictions on Lambda Functions
This should work:
service: some-service
plugins:
  - serverless-pseudo-parameters
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1
  memorySize: 128
  timeout: 5
resources:
  - ${file(./resources.yml):resources}
functions:
  - ${file(./lambda-at-edge/function.yml):functions}
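On the edit's question about LambdaFunctionAssociations: with the cloudFront event shown in the function definition above, the Framework creates the distribution and wires the association itself, so no manual wiring is needed. If the distribution is instead defined manually in resources.yml, the association sits on a cache behavior and must reference a published function version ARN. A rough sketch of what resources.yml could contain (resource names and the version reference are hypothetical placeholders, not the exact logical IDs Serverless generates):

# resources.yml (sketch)
resources:
  Resources:
    EdgeDistribution:                       # hypothetical logical name
      Type: AWS::CloudFront::Distribution
      Properties:
        DistributionConfig:
          Enabled: true
          Origins:
            - Id: s3-origin
              DomainName: some.s3.amazonaws.com
              S3OriginConfig: {}
          DefaultCacheBehavior:
            TargetOriginId: s3-origin
            ViewerProtocolPolicy: redirect-to-https
            ForwardedValues:
              QueryString: false
            LambdaFunctionAssociations:
              - EventType: viewer-response
                # must be a published *version* ARN, never $LATEST;
                # the logical ID below is a placeholder for the AWS::Lambda::Version resource
                LambdaFunctionARN: !Ref LambdaAtEdgeFunctionVersionPlaceholder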
