Reducing a 30+ second cold start on AWS API Gateway + Lambda

I've been facing an extremely slow cold start on Lambda Functions deployed in Docker containers together with an API Gateway.
Tech Stack:
FastAPI
Mangum (https://mangum.io/)
API Gateway
AWS Lambda
To do the deployment, I've been using AWS SAM with the following template file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  demo

Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 118
      MemorySize: 3008
      CodeUri: app/
      PackageType: Image
      Events:
        ApiEvent:
          Properties:
            RestApiId:
              Ref: FastapiExampleGateway
            Path: /{proxy+}
            Method: ANY
            Auth:
              ApiKeyRequired: true
          Type: Api
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: .

  FastapiExampleGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      OpenApiVersion: '3.0.0'
      # Timeout: 30
      Auth:
        ApiKeyRequired: true
        UsagePlan:
          CreateUsagePlan: PER_API
          UsagePlanName: GatewayAuthorization

Outputs:
  Api:
    Description: "API Gateway endpoint URL for Prod stage for App function"
    Value: !Sub "https://${FastapiExampleGateway}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
The lambda is relatively light, with the following requirements installed:
jsonschema==4.16.0
numpy==1.23.3
pandas==1.5.0
pandas-gbq==0.17.8
fastapi==0.87.0
uvicorn==0.19.0
PyYAML==6.0
SQLAlchemy==1.4.41
pymongo==4.3.2
google-api-core==2.10.1
google-auth==2.11.0
google-auth-oauthlib==0.5.3
google-cloud-bigquery==3.3.2
google-cloud-bigquery-storage==2.16.0
google-cloud-core==2.3.2
google-crc32c==1.5.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.4
mangum==0.11.0
And the Dockerfile I'm using for the deployment is:
FROM public.ecr.aws/lambda/python:3.9
WORKDIR /code
RUN pip install pip --upgrade
COPY ./api/requirements.txt /code/api/requirements.txt
RUN pip install --no-cache-dir -r /code/api/requirements.txt
COPY ./api /code/api
EXPOSE 7777
CMD ["api.main.handler"]
ENV PYTHONPATH "${PYTHONPATH}:/code/"
This results in a roughly 250 MB image.
On the first Lambda pull I'm seeing a very long start before the actual lambda execution. It reaches the point where API Gateway times out due to its maximum 30-second response time!
Local tests using sam local start-api work fine.
I've tried increasing the lambda function RAM to higher values.
Not sure if this is a problem with Mangum (the wrapper for FastAPI)?
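For reference, the handler module that the Dockerfile's CMD ("api.main.handler") points at is wired roughly like this; the exact file layout and routes below are my assumption, not taken from the original code:

# api/main.py (sketch)
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/health")
def health():
    # trivial route, just to have something to hit while measuring cold starts
    return {"status": "ok"}

# Mangum adapts the ASGI app to the Lambda event/context interface,
# so Lambda invokes "handler" for each API Gateway request
handler = Mangum(app)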

Related

How can I add an HTTP API stage in serverless

I am trying to deploy a serverless application to different stages (prod and dev). I want to deploy it to a single API Gateway on two different stages, like:
http://vfdfdf.execute-api.us-west-1.amazonaws.com/dev/
http://vfdfdf.execute-api.us-west-1.amazonaws.com/prod/
I have written the following in serverless:
provider:
  name: aws
  runtime: nodejs14.x
  region: ${self:custom.${self:custom.stage}.lambdaRegion}
  httpApi:
    id: ${self:custom.${self:custom.stage}.httpAPIID}
  stage: ${opt:stage, 'dev'}
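(For context, this presumably relies on a per-stage custom map along these lines; the keys match the snippet above, but the values are placeholders:)

custom:
  stage: ${opt:stage, 'dev'}
  dev:
    lambdaRegion: us-west-1
    httpAPIID: xxxxxxxxxx
  prod:
    lambdaRegion: us-west-1
    httpAPIID: yyyyyyyyyy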
Edited to reflect the comments
That can be done during the serverless deployment phase.
I would just have dev as the default in the serverless.yml file:
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: eu-west-1
  httpApi:
    # Attach to an externally created HTTP API via its ID:
    id: w6axy3bxdj
    # or leave this commented out on the very first deployment so serverless creates the HTTP API

custom:
  stage: ${opt:stage, self:provider.stage}

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /${self:custom.stage}/hello
          method: get
Then, the command:
serverless deploy
deploys to stage dev in region eu-west-1, using the default values.
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/dev/hello
For production, the default values can be overridden on the command line. Then I would use the command:
serverless deploy --stage prod
endpoint: GET - https://w6axy3bxdj.execute-api.eu-west-1.amazonaws.com/prod/hello
In my understanding you do not change the region between dev and prod, but in case you want to do that, the production deployment could be:
serverless deploy --stage prod --region eu-west-2
to deploy in a different region than the default one from the serverless yml file.

Cannot add lambda layer via GUI or programmatically, but works via cloud formation. Failed to unzip archive: Zip file contains invalid files/folders;

I'm following this excellent article: https://github.com/vittorio-nardone/selenium-chromium-lambda/
End to end the example works correctly - I just want to re-use the layers that are created in my own function.
Whatever method I use to try and add the layer fails: manually via the GUI, with Boto3 in Python, or with the AWS CLI, although it works on the function set up by the CloudFormation script.
aws lambda update-function-configuration --function-name='test_headless' --layers='arn:aws:lambda:eu-west-1:366134052888:layer:SeleniumChromiumLayer:1'
An error occurred (InvalidParameterValueException) when calling the UpdateFunctionConfiguration operation: Failed to unzip archive: Zip file contains invalid files/folders;
Clearly I'm missing something here.
Partial extract from the CloudFormation script:
ScreenshotFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.7
    Description: Function to take a screenshot of a website.
    Handler: src/lambda_function.lambda_handler
    Role:
      Fn::GetAtt: [ "ScreenshotFunctionRole", "Arn" ]
    Environment:
      Variables:
        PYTHONPATH: "/var/task/src:/opt/python"
        PATH: "/opt/bin:/opt/bin/lib"
        URL:
          Ref: WebSite
        BUCKET:
          Ref: BucketName
        DESTPATH:
          Ref: ScreenshotsFolder
    Timeout: 60
    MemorySize: 2048
    Code:
      S3Bucket:
        Ref: BucketName
      S3Key:
        Fn::Sub: '${SourceFolder}/ScreenshotFunction.zip'
    Layers:
      - Ref: SeleniumChromiumLayer

SeleniumChromiumLayer:
  Type: AWS::Lambda::LayerVersion
  Properties:
    CompatibleRuntimes:
      - python3.7
      - python3.6
    Content:
      S3Bucket:
        Ref: BucketName
      S3Key:
        Fn::Sub: '${SourceFolder}/SeleniumChromiumLayer.zip'
    Description: Selenium and Chromium Layer for Python3.6
How can the contents of the zip be fine to add via CloudFormation but not in any other manner?
It seems there was some corruption on the function: adding the layer to another function worked successfully.
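If the error shows up again, one way to sanity-check a published layer is to download the archive AWS stores for it and list its contents. This is a generic AWS CLI / unzip sketch, not something from the original thread:

# print the pre-signed download URL for the layer version
aws lambda get-layer-version \
    --layer-name SeleniumChromiumLayer \
    --version-number 1 \
    --query 'Content.Location' --output text
# fetch that URL and inspect the zip structure for odd entries
curl -o layer.zip "<pre-signed-url>"
unzip -l layer.zip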

How to debug and run multiple lambdas locally

I would like to build a .NET HTTP API using AWS Lambdas. These lambdas will be called by the UI and some other systems via API Gateway. Obviously, in the local environment I would like to run/debug these.
What I have tried:
a) Using the mock tool that comes with the AWS Visual Studio templates. You can call individual lambdas, but I couldn't figure out how to call them from e.g. Postman using normal REST calls. I don't know how the mock tool makes those calls, as Chrome/Firefox doesn't show them.
b) Using sam local start-api. Here is what I did:
sam --version
SAM CLI, version 1.22.0
sam init (choose aws quick start template, package type Image and amazon/dotnet5.0-base as base image)
I can build the solution with sam build, run it with sam local start-api, browse to http://localhost:3000/hello, and it works. The problem is that I would need to build in VS and repeat those steps every time I change the code. There is also no easy way to attach a debugger.
So what is the recommended way to do this? I know you can run a whole .NET web API inside a lambda, but that doesn't sound like a good technical solution. I assume I am not the first person building an HTTP API using lambdas.
It might be worth considering running a lambda-like environment in Docker.
While including the dotnet tools you need might not be feasible in actual Lambda, it might be feasible to either include them in a Docker image or bind-mount them into a Docker container. These images from lambci can help with that: https://hub.docker.com/r/lambci/lambda/
You can use sam local: https://github.com/thoeni/aws-sam-local
Example of creating an API with API Gateway:
Resources:
  ApiGatewayToLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: ['sts:AssumeRole']
            Effect: Allow
            Principal:
              Service: ['apigateway.amazonaws.com']
        Version: '2012-10-17'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaRole
        - arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs

  ApiGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: test
      EndpointConfiguration: REGIONAL
      DefinitionBody:
        swagger: "2.0"
        info:
          title: "TestAPI"
          description: TestAPI description in Markdown.
        paths:
          /create:
            post:
              x-amazon-apigateway-integration:
                uri:
                  !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MyLambda.Arn}/invocations
                credentials: !GetAtt ApiGatewayToLambdaRole.Arn
                responses: {}
                httpMethod: POST
                type: aws
        x-amazon-apigateway-request-validators:
          Validate query string parameters and headers:
            validateRequestParameters: true
            validateRequestBody: false

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: ['sts:AssumeRole']
            Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
        Version: '2012-10-17'
      Path: /
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                  - logs:*
                  - lambda:*
                  - ec2:CreateNetworkInterface
                  - ec2:DescribeNetworkInterfaces
                  - ec2:DeleteNetworkInterface
                Effect: Allow
                Resource: "*"

  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      Role: !GetAtt LambdaRole.Arn
      Handler: myfunctionname.lambda_handler
      CodeUri: ./src/myfunctionname
      Events:
        SCAPIGateway:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /create
            Method: POST
...
Build:
time sam build --use-container --template backend/template.yaml
Invoke Lambda Locally:
The command to invoke a Lambda locally is sam local invoke, and the -e flag is used to specify the path to the Lambda event.
$ sam local invoke -e event.json
When it is run, it will look something like this:
$ sam local invoke MyLambda -e event.json
2021-04-20 11:11:09 Invoking index.handler
2021-04-20 11:11:09 Found credentials in shared credentials file: ~/.aws/credentials
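If you don't have an event.json yet, SAM can generate a sample API Gateway proxy event to use as a starting point (standard SAM CLI commands; the function name is just an example):

sam local generate-event apigateway aws-proxy > event.json
sam local invoke MyLambda -e event.json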
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-invoke.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-start-api.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
https://github.com/ashiina/lambda-local

Lambda@Edge CloudFront resource creation

I'm a little lost here. I'm trying to deploy a simple function that uses Lambda@Edge, but I'm having some problems creating the CloudFront resource and attaching it to the lambda function.
Here is an example of the serverless.yml:
service: some-service

plugins:
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1

resources:
  - ${file(./resources.yml):resources}

functions:
  - ${file(./lambda-at-edge/function.yml):functions}
The function definition:
functions:
  lambda-at-edge-function:
    description: Lambda at edge authentication
    handler: serverless/index.handler
    events:
      - cloudFront:
          eventType: viewer-response
          origin: s3://some.s3.amazonaws.com/
One thing: if I don't define the CloudFront resource, it isn't created. If I do define the resource and attach it to the serverless definition, the resource is created, but then I don't know how to attach that CloudFront distribution to the function.
Edit:
I'm deploying everything with sls deploy, so my question now is: how can I attach the function name to be used in LambdaFunctionAssociations on the CloudFront distribution?
When using Lambda@Edge you have to respect its limits. Check them out here: Requirements and Restrictions on Lambda Functions.
This should work:
service: some-service

plugins:
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1
  memorySize: 128
  timeout: 5

resources:
  - ${file(./resources.yml):resources}

functions:
  - ${file(./lambda-at-edge/function.yml):functions}
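Regarding the edit about LambdaFunctionAssociations: below is a minimal sketch of how the CloudFront distribution in resources.yml could reference the deployed function. The resource names, origin, and version number are placeholders, and Lambda@Edge requires the ARN of a published function version (not $LATEST):

resources:
  Resources:
    Distribution:
      Type: AWS::CloudFront::Distribution
      Properties:
        DistributionConfig:
          Enabled: true
          Origins:
            - Id: s3Origin
              DomainName: some.s3.amazonaws.com
              S3OriginConfig: {}
          DefaultCacheBehavior:
            TargetOriginId: s3Origin
            ViewerProtocolPolicy: redirect-to-https
            ForwardedValues:
              QueryString: false
            LambdaFunctionAssociations:
              - EventType: viewer-response
                # must point at a numbered, published version of the function
                LambdaFunctionARN: arn:aws:lambda:us-east-1:#{AWS::AccountId}:function:some-service-${env:STAGE}-lambda-at-edge-function:1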

How to access private AWS resources in AWS SAM LOCAL when start-api testing

I've been working with AWS SAM Local to create and test a lambda / API Gateway stack before shipping it to production. I have recently run into a brick wall when trying to access private resources (RDS) when testing locally (sam local start-api --profile [profile]). I'm able to connect to some of these private resources if I do some SSH tunneling, but I was wondering whether I can test locally, without tunneling, using the VPC configuration.
Below is an example sam template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example Stack

Globals:
  Function:
    Timeout: 3

Resources:
  ExampleFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.example
      Runtime: nodejs8.10
      CodeUri: .
      Description: 'Just an example'
      MemorySize: 128
      Role: 'arn:aws:iam::[arn-role]'
      VpcConfig:
        SecurityGroupIds:
          - sg-[12345]
        SubnetIds:
          - subnet-[12345]
          - subnet-[23456]
          - subnet-[34567]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /example
            Method: GET
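(For reference, the SSH tunneling mentioned in the question is typically a local port-forward through a bastion host inside the VPC; the hostnames and port below are placeholders:)

ssh -N -L 5432:my-db.xxxxxxxx.us-east-1.rds.amazonaws.com:5432 ec2-user@bastion-public-hostname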
After reading through a lot of documentation and searching Stack Overflow for anything that would help... I ended up joining the #samdev Slack channel and asked for help. I was given some guidance and a great guide on setting up OpenVPN on an EC2 instance.
The setup was super easy (completed in under 30 minutes) and the EC2 instance uses a pre-baked AMI image. Make sure you assign the new EC2 instance to the appropriate VPC containing the resources you need access to.
Here is a link to the OpenVPN guide: https://openvpn.net/index.php/access-server/on-amazon-cloud.html
You can request an invite to the #samdev slack channel here: https://awssamopensource.splashthat.com/
