Lambda@Edge CloudFront resource creation - aws-lambda

I'm a little lost here. I'm trying to deploy a simple function that uses Lambda@Edge, but I'm having some problems creating the CloudFront resource and attaching that CloudFront distribution to the lambda function.
Here is an example of the serverless.yml:
service: some-service

plugins:
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1

resources:
  - ${file(./resources.yml):resources}

functions:
  - ${file(./lambda-at-edge/function.yml):functions}
The function definition:
functions:
  lambda-at-edge-function:
    description: Lambda at edge authentication
    handler: serverless/index.handler
    events:
      - cloudFront:
          eventType: viewer-response
          origin: s3://some.s3.amazonaws.com/
One thing: if I don't define the CloudFront resource, it's not created, and if I define the resource and attach it to the serverless definition, the resource is created, but then I don't know how to attach that CloudFront distribution to the function.
Edit:
I'm deploying everything with sls deploy, so my question now is: how can I attach the function name to be used in LambdaFunctionAssociations on the CloudFront distribution?

When using Lambda@Edge you have to respect its limits.
Check them out here:
Requirements and Restrictions on Lambda Functions
This should work:
service: some-service

plugins:
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1
  memorySize: 128
  timeout: 5

resources:
  - ${file(./resources.yml):resources}

functions:
  - ${file(./lambda-at-edge/function.yml):functions}
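For the second part of the question (wiring the function into LambdaFunctionAssociations): when you use the built-in cloudFront event, the Serverless Framework creates the CloudFront distribution and fills in LambdaFunctionAssociations with the published function version for you, so you don't have to reference the function ARN from your own resources block. A minimal sketch of what ./lambda-at-edge/function.yml could contain, reusing the handler and origin from the question (the bucket name is just the example value):

functions:
  lambda-at-edge-function:
    description: Lambda at edge authentication
    handler: serverless/index.handler
    memorySize: 128   # viewer-response triggers are limited to 128 MB
    timeout: 5        # and to 5 seconds
    events:
      - cloudFront:
          eventType: viewer-response
          origin: s3://some.s3.amazonaws.com/

If you instead keep your own CloudFront distribution in resources.yml, you would have to point LambdaFunctionAssociations at a specific published version of the function (Lambda@Edge does not accept $LATEST), which is why letting the cloudFront event manage the association is usually simpler.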

Related

Serverless framework - Deploying a function with a different runtime

I'm using the Serverless framework to deploy multiple Lambda functions, all of which run on NodeJS. Now I need to create a new lambda function that runs on Java 11, and I want its configuration to be in the same yaml file as my other lambda functions. My jar file is uploaded to an S3 bucket, and I'm referencing that bucket in my Serverless config to fetch the package from there and deploy it to the function. However, it seems like the wrong package is being deployed to my function, as I noticed that the file size is larger than the actual size of my JAR file. Therefore, when I run my lambda function it fails because it cannot find the handler, due to the incorrect package being deployed. I verified this by manually uploading the jar file to my Java lambda function, and it worked.
Below is the code snippet of my yaml file:
---
service: api

provider:
  name: aws
  stackName: ${self:custom.prefix}-${opt:stage}-${self:service.name}
  runtime: nodejs14.x
  stage: ${opt:custom_stage, opt:stage}
  tracing:
    lambda: Active
  timeout: 30
  logRetentionInDays: 180
  environment:
    STAGE: ${opt:stage, opt:stage}
    ENVIRONMENT: ${self:provider.stage}
    SUPPRESS_NO_CONFIG_WARNING: true
    ALLOW_CONFIG_MUTATIONS: true

functions:
  sample-function-1:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-sample-function-1
    name: ${self:custom.prefix}-${opt:stage}-sample-function-1
    handler: authorizers/handler.authHandler1

  sample-function-2:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-sample-function-2
    name: ${self:custom.prefix}-${opt:stage}-sample-function-2
    handler: authorizers/handler.authHandler1

  myJavaFunction:
    role: arn:aws:iam::#{AWS::AccountId}:role/${self:custom.prefix}-${self:provider.stage}-myJavaFunction-role
    name: ${self:custom.prefix}-${opt:stage}-myJavaFunction
    runtime: java11
    package:
      artifact: s3://myBucket/myJarFile.jar
    handler: com.myFunction.LambdaFunctionHandler
    memorySize: 512
    timeout: 900
How can I deploy the correct package to my lambda function by fetching the jar file from an S3 bucket?
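One thing that might be worth ruling out (an assumption on my part, not something stated in the question): unless packaging is switched to individual mode, the Serverless Framework can end up building one service-wide bundle and deploying it to every function, ignoring the function-level artifact. A minimal sketch of that setting:

package:
  individually: true   # assumption: each function is packaged and deployed with its own artifact

functions:
  myJavaFunction:
    runtime: java11
    handler: com.myFunction.LambdaFunctionHandler
    package:
      artifact: s3://myBucket/myJarFile.jar   # same per-function artifact as in the question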

How can I add an HTTP API stage in serverless?

I am trying to deploy a serverless application to different stages (prod and dev). I want to deploy it to a single API Gateway on two different stages, like:
http://vfdfdf.execute-api.us-west-1.amazonaws.com/dev/
http://vfdfdf.execute-api.us-west-1.amazonaws.com/prod/
I have written this in serverless:
provider:
  name: aws
  runtime: nodejs14.x
  region: ${self:custom.${self:custom.stage}.lambdaRegion}
  httpApi:
    id: ${self:custom.${self:custom.stage}.httpAPIID}
  stage: ${opt:stage, 'dev'}
Edited to reflect the comments
That can be done during the serverless deployment phase.
I would just have dev as the default in the serverless yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-1
  httpApi:
    # Attach to an externally created HTTP API via its ID:
    id: w6axy3bxdj
    # or commented on the very first deployment so serverless creates the HTTP API

custom:
  stage: ${opt:stage, self:provider.stage}

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /${self:custom.stage}/hello
          method: get
Then, the command:
serverless deploy
deploys to stage dev and, in this case, region eu-west-1, using the default values.
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/dev/hello
For production, the default values can be overridden on the command line. Then I would use the command:
serverless deploy --stage prod
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/prod/hello
In my understanding, you do not change the region between dev and prod, but in case you would want to do that, the production deployment could be:
serverless deploy --stage prod --region eu-west-2
to deploy in a different region than the default one from the serverless yml file.

How to debug and run multiple lambdas locally

I would like to build a .NET HTTP API using AWS Lambdas. These lambdas will be called by the UI and some other systems via API Gateway. Obviously, in a local environment I would like to run/debug these.
What I have tried:
a) Using the mock tool that comes with the AWS Visual Studio templates. You can call individual lambdas, but I couldn't figure out how to call them from e.g. Postman using normal REST calls. I don't know how the mock tool makes those calls, as Chrome/Firefox doesn't show them.
b) Using sam local start-api. Here is what I did:
sam --version
SAM CLI, version 1.22.0
sam init (choose aws quick start template, package type Image and amazon/dotnet5.0-base as base image)
I can build the solution with sam build, run it with sam local start-api, and browse to http://localhost:3000/hello and it works. The problem is that I would need to build in VS and repeat those steps every time I change code. There is also no easy way to attach a debugger.
So what is the recommended way to do this? I know you can run a whole .NET web API inside a lambda, but that doesn't sound like a good technical solution. I am assuming I am not the first person building an HTTP API using lambdas.
It might be worth considering running a lambda-like environment in Docker.
While including the dotnet tools you need might not be feasible in an actual Lambda, it might be feasible to either include them in a Docker image or bind-mount them into a Docker container. These images from lambci can help with that: https://hub.docker.com/r/lambci/lambda/
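A rough sketch of what that can look like with the lambci images (the image tag, handler string, and paths are only examples, so adjust them to your build output):

# mount the published output into the Lambda-like container and invoke the handler once
docker run --rm -v "$PWD/publish":/var/task \
  lambci/lambda:dotnetcore3.1 \
  MyAssembly::MyNamespace.MyFunction::Handler \
  '{"httpMethod": "GET", "path": "/hello"}'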
You can use sam local
https://github.com/thoeni/aws-sam-local
Create API with API Gateway example:
Resources:
  ApiGatewayToLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: ['sts:AssumeRole']
            Effect: Allow
            Principal:
              Service: ['apigateway.amazonaws.com']
        Version: '2012-10-17'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaRole
        - arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs

  ApiGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: test
      EndpointConfiguration: REGIONAL
      DefinitionBody:
        swagger: "2.0"
        info:
          title: "TestAPI"
          description: TestAPI description in Markdown.
        paths:
          /create:
            post:
              x-amazon-apigateway-integration:
                uri:
                  !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MyLambda.Arn}/invocations
                credentials: !GetAtt ApiGatewayToLambdaRole.Arn
                responses: {}
                httpMethod: POST
                type: aws
        x-amazon-apigateway-request-validators:
          Validate query string parameters and headers:
            validateRequestParameters: true
            validateRequestBody: false

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: ['sts:AssumeRole']
            Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
        Version: '2012-10-17'
      Path: /
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                  - logs:*
                  - lambda:*
                  - ec2:CreateNetworkInterface
                  - ec2:DescribeNetworkInterfaces
                  - ec2:DeleteNetworkInterface
                Effect: Allow
                Resource: "*"

  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      Role: !GetAtt LambdaRole.Arn
      Handler: myfunctionname.lambda_handler
      CodeUri: ./src/myfunctionname
      Events:
        SCAPIGateway:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /create
            Method: POST
...
Build:
time sam build --use-container --template backend/template.yaml
Invoke Lambda Locally:
The command to invoke the Lambda locally is sam local invoke, and the -e flag is used to specify the path to the Lambda event.
$ sam local invoke -e event.json
When it is run, it will look something like this:
$ sam local invoke MyLambda -e event.json
2021-04-20 11:11:09 Invoking index.handler
2021-04-20 11:11:09 Found credentials in shared credentials file: ~/.aws/credentials
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-invoke.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-start-api.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
https://github.com/ashiina/lambda-local
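For actually attaching a debugger, the SAM CLI exposes a debug port flag; a rough sketch (the port and debugger path are placeholders, and .NET additionally needs the debugger mounted into the container, as described in the debugging link above):

# expose a debug port and mount a locally installed debugger into the Lambda container
sam local start-api -d 5858 --debugger-path ./debugger

# or debug a single invocation
sam local invoke MyLambda -e event.json -d 5858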

Multiple microservices inside same AWS API Gateway

I'm developing a series of microservices which need to share the same AWS API Gateway. Here's my structure:
/
/assessments
/skills
/work-values
/graphql
/skills, /work-values, and /graphql are 3 different microservices I'm trying to register with the same AWS API Gateway. The problem I'm having is getting the serverless.yaml files for the /skills and /work-values routes to nest under /assessments. There is no functionality for /assessments in and of itself; it exists just so we can organize all of our assessments under the same URL path structure.
Here's my serverless.yaml file for /work-values:
service:
  name: assessments-workvalues

...

custom:
  stage: ${opt:stage, self:provider.stage}

provider:
  ...
  apiGateway:
    restApiId:
      # THE FOLLOWING REFERENCES A VARIABLE FROM MY API GATEWAY ROOT
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': # HOW DO I GET THE PROPER VALUE HERE TO MAP TO `/assessments`?
  ...

functions:
  ...
Here's my serverless.yaml file for /assessments:
service:
  name: assessments

custom:
  stage: ${opt:stage, self:provider.stage}

provider:
  ...
  apiGateway:
    restApiId:
      # THE FOLLOWING REFERENCES A VARIABLE FROM MY API GATEWAY ROOT
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiRootResourceId

functions:
  ...

resources:
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ${self:custom.stage}-Assessments-ApiGatewayRestApiId
    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: ${self:custom.stage}-Assessments-ApiGatewayRestApiRootResourceId
The problem seems to be the Outputs block in the serverless.yaml file for the assessments route. When I run serverless deploy, I get this error message:
Error: The CloudFormation template is invalid: Unresolved resource dependencies [ApiGatewayRestApi] in the Outputs block of the template
At the end of the Share an API Endpoint Between Services article, the author mentions: 'You HAVE TO import /billing from the billing-api, so the new service will only need to create the /billing/xyz part' (which seems to be the situation I'm in). But the author does not explain how to import /billing. Or, in my case, how do I import /assessments into the serverless.yaml files for each assessment microservice?
After further research, I found this link:
Splitting Your Serverless Framework API on AWS
I ended up reworking my original approach following what's in the article above. The piece I was missing was a root or base serverless file that creates the routing in AWS API Gateway and exposes those placeholder resources as outputs, which the subsequent child serverless files consume as inputs to wire their lambda functions to routes under the API Gateway umbrella.
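A minimal sketch of that split, assuming hypothetical service and export names (adapt them to your stack) and reusing the custom.stage variable from the question's files:

# root/base serverless.yml: owns the shared REST API and the /assessments path part
resources:
  Resources:
    SharedApiGateway:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: ${self:custom.stage}-shared-api
    AssessmentsResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId:
          Ref: SharedApiGateway
        ParentId:
          Fn::GetAtt: [SharedApiGateway, RootResourceId]
        PathPart: assessments
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: SharedApiGateway
      Export:
        Name: ${self:custom.stage}-ApiGatewayRestApiId
    AssessmentsResourceId:
      Value:
        Ref: AssessmentsResource
      Export:
        Name: ${self:custom.stage}-AssessmentsResourceId

Each child service (e.g. assessments-workvalues) then imports the API and uses the exported /assessments resource ID as its root resource, so its routes nest underneath it:

provider:
  apiGateway:
    restApiId:
      'Fn::ImportValue': ${self:custom.stage}-ApiGatewayRestApiId
    restApiRootResourceId:
      'Fn::ImportValue': ${self:custom.stage}-AssessmentsResourceId

functions:
  listWorkValues:
    handler: handler.list
    events:
      - http:
          path: work-values   # becomes /assessments/work-values
          method: get

Defining the REST API explicitly in the root stack (rather than relying on a generated ApiGatewayRestApi resource) also avoids the "Unresolved resource dependencies [ApiGatewayRestApi]" error, since the Outputs then reference a resource that actually exists in that template.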

How to access private AWS resources in AWS SAM LOCAL when start-api testing

I've been working with AWS SAM Local to create and test a lambda / API Gateway stack before shipping it to production. I have recently run into a brick wall when trying to access private resources (RDS) when testing locally (sam local start-api --profile [profile]). I'm able to connect to some of these private resources if I do some SSH tunneling, but I was wondering whether I can test locally, without tunneling, using VPC.
Below is an example sam template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example Stack
Globals:
  Function:
    Timeout: 3
Resources:
  ExampleFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.example
      Runtime: nodejs8.10
      CodeUri: .
      Description: 'Just an example'
      MemorySize: 128
      Role: 'arn:aws:iam::[arn-role]'
      VpcConfig:
        SecurityGroupIds:
          - sg-[12345]
        SubnetIds:
          - subnet-[12345]
          - subnet-[23456]
          - subnet-[34567]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /example
            Method: GET
After reading through a lot of documentation and searching Stack Overflow for anything that would help, I ended up joining the #samdev slack channel and asking for help. I was given some guidance and a great guide on setting up OpenVPN on an EC2 instance.
The setup was super easy (completed in under 30 minutes) and the EC2 instance uses a pre-baked AMI image. Make sure you assign the new EC2 instance to the appropriate VPC containing the resources you need access to.
Here is a link to the OpenVPN guide: https://openvpn.net/index.php/access-server/on-amazon-cloud.html
You can request an invite to the #samdev slack channel here: https://awssamopensource.splashthat.com/
