How to access private AWS resources in AWS SAM Local when testing with start-api - aws-lambda

I've been working with AWS SAM Local to create and test a Lambda / API Gateway stack before shipping it to production. I recently ran into a brick wall when trying to access private resources (RDS) while testing locally (sam local start-api --profile [profile]). I'm able to connect to some of these private resources if I do some ssh tunneling, but I was wondering whether I can test locally against the VPC without tunneling.
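For reference, the tunneling workaround looks roughly like this (a sketch; the bastion host and RDS endpoint are placeholders):
# Forward a local port to the private RDS endpoint through a bastion host in the VPC
ssh -N -L 5432:mydb.xxxxxx.us-east-1.rds.amazonaws.com:5432 ec2-user@<bastion-public-ip>
With the tunnel up, the locally running function connects to localhost:5432 instead of the private endpoint.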
Below is an example sam template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example Stack
Globals:
  Function:
    Timeout: 3
Resources:
  ExampleFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.example
      Runtime: nodejs8.10
      CodeUri: .
      Description: 'Just an example'
      MemorySize: 128
      Role: 'arn:aws:iam::[arn-role]'
      VpcConfig:
        SecurityGroupIds:
          - sg-[12345]
        SubnetIds:
          - subnet-[12345]
          - subnet-[23456]
          - subnet-[34567]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /example
            Method: GET

After reading through a lot of documentation and searching Stack Overflow for anything that would help, I ended up joining the #samdev Slack channel and asking for help. I was given some guidance and a great guide on setting up OpenVPN on an EC2 instance.
The setup was super easy (completed in under 30 minutes), and the EC2 instance uses a pre-baked AMI image. Make sure you assign the new EC2 instance to the appropriate VPC containing the resources you need access to.
Here is a link to the OpenVPN guide: https://openvpn.net/index.php/access-server/on-amazon-cloud.html
You can request an invite to the #samdev slack channel here: https://awssamopensource.splashthat.com/

Related

Reducing an over-30-second cold start on AWS API Gateway + Lambda

I've been facing an extremely slow cold start on Lambda Functions deployed in Docker containers together with an API Gateway.
Tech Stack:
FastAPI
Mangum (https://mangum.io/)
API Gateway
AWS Lambda
To do the deployment, I've been using AWS SAM with the following template file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  demo
Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 118
      MemorySize: 3008
      CodeUri: app/
      PackageType: Image
      Events:
        ApiEvent:
          Properties:
            RestApiId:
              Ref: FastapiExampleGateway
            Path: /{proxy+}
            Method: ANY
            Auth:
              ApiKeyRequired: true
          Type: Api
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: .
  FastapiExampleGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      OpenApiVersion: '3.0.0'
      # Timeout: 30
      Auth:
        ApiKeyRequired: true
        UsagePlan:
          CreateUsagePlan: PER_API
          UsagePlanName: GatewayAuthorization
Outputs:
  Api:
    Description: "API Gateway endpoint URL for Prod stage for App function"
    Value: !Sub "https://${FastapiExampleGateway}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
The lambda is relatively light, with the following requirements installed:
jsonschema==4.16.0
numpy==1.23.3
pandas==1.5.0
pandas-gbq==0.17.8
fastapi==0.87.0
uvicorn==0.19.0
PyYAML==6.0
SQLAlchemy==1.4.41
pymongo==4.3.2
google-api-core==2.10.1
google-auth==2.11.0
google-auth-oauthlib==0.5.3
google-cloud-bigquery==3.3.2
google-cloud-bigquery-storage==2.16.0
google-cloud-core==2.3.2
google-crc32c==1.5.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.4
mangum==0.11.0
And the Dockerfile I'm using for the deployment is:
# AWS Lambda base image for Python 3.9
FROM public.ecr.aws/lambda/python:3.9
WORKDIR /code
RUN pip install pip --upgrade
COPY ./api/requirements.txt /code/api/requirements.txt
RUN pip install --no-cache-dir -r /code/api/requirements.txt
COPY ./api /code/api
EXPOSE 7777
# Handler passed to the Lambda runtime interface client
CMD ["api.main.handler"]
# Make /code importable so api.main can be resolved
ENV PYTHONPATH "${PYTHONPATH}:/code/"
This results in a roughly 250 MB image.
On the first Lambda pull, I'm seeing what looks like a very long start (log screenshot omitted) before the actual lambda execution. It reaches the point where API Gateway times out due to its maximum 30-second response limit!
Local tests using sam local start-api work fine.
I've tried increasing the lambda function RAM to higher values.
Not sure if this is a problem with Mangum (the wrapper for FastAPI)?
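For context, the handler referenced by the Dockerfile's CMD (api.main.handler) is wired up with Mangum roughly like this (a sketch; the module contents and route are assumptions):
# api/main.py (sketch)
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/ping")
def ping():
    return {"ok": True}

# Mangum adapts API Gateway events to the ASGI app
handler = Mangum(app)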

SSM parameters in SAM local AWS setup: unable to get parameters to work locally

Hi, people.
I have been trying to set up a project to test my lambdas in a local environment. We are using SAM to mimic AWS resources, and everything works pretty well with one exception: for business reasons, I had to include SSM parameters. When we try to read the parameters using sam local start-lambda ..., the lambda code is not able to retrieve them. I know the AWS credentials are fine, as I can connect to AWS, but we don't want to use real AWS services for this. We want to pass (set, define, whatever you want to call it) SSM parameters in the local environment and then use them with sam local start-lambda, without any AWS connection, so the parameters are used for local testing only.
I have read the following post: How to access SSM Parameter Store from SAM lambda local in node
And this issue on GitHub:
https://github.com/aws/aws-sam-cli/issues/616
Both mention that the way to do it is by using --env-vars, but it is not working so far.
This is my template:
Parameters:
  IdentityNameParameter:
    Type: AWS::SSM::Parameter::Value<String>
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler::handleRequest
      CodeUri: lambdaFunction
      Runtime: java11
      Timeout: 40
      Environment:
        Variables:
          AWS_ACCESS_KEY_ID: "keyid"
          AWS_SECRET_ACCESS_KEY: "accesskey"
          AWS_DEFAULT_REGION: "us-west-1"
          AWS_REGION: "us-west-1"
This is what I use to start the lambda:
sam local start-lambda --host 0.0.0.0 -d 5859 --docker-volume-basedir /folderWithClasses --container-host host.docker.internal --debug --env-vars env.json
This is the env.json:
{
  "Parameters": {
    "IdentityNameParameter": "admin"
  }
}
I guess there is no support for doing this. If so, what's the point of having SAM for local testing if you need AWS to actually test?
Any clue?
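As a side note, the top-level Parameters section of env.json overrides Lambda environment variables, not CloudFormation template parameters; for a template parameter such as IdentityNameParameter it may be worth trying the --parameter-overrides flag as well, along these lines:
sam local start-lambda --env-vars env.json --parameter-overrides ParameterKey=IdentityNameParameter,ParameterValue=admin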

how to connect a serverless client with a given AWS Lambda function

I have a question about how to connect a serverless client with a given AWS Lambda function.
I'm building a system that provides the developers with a cloud-based dev environment.
It provides a serverless dev environment built atop the AWS Lambda and DynamoDB services.
Some developers have asked me how to use the Serverless Framework in the given environment.
Due to the company's security policy, I can't grant admin authority to the developers, so they find it difficult to run the sls deploy command, which requires CRUD permissions on the IAM service.
I've tried connecting the serverless client to the AWS Lambda provided by my system without executing the deploy command, but everything failed:
it requires me to execute sls deploy before the deploy function command.
Is there any way to connect a serverless client with a given AWS Lambda function?
If there is a best practice for granting minimal authority, please give me a suggestion.
Thank you in advance.
First of all, you will need to set up different groups with dedicated policies granting rights to your users. Here's a CloudFormation template, for instance (it references the default OrganizationAccountAccessRole, but you might/should create your own with minimal access):
Resources:
  # ADMIN QA
  AssumeAdministratorQARolePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: "AdminQA"
      Description: "Assume the qa administrative role"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AssumeAdministratorQARolePolicy"
            Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Resource: "arn:aws:iam::ACCOUNTID:role/OrganizationAccountAccessRole"
  AdminQAGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: AdminQA
      ManagedPolicyArns:
        - !Ref AssumeAdministratorQARolePolicy
  AdminQAUsersToGroup:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref AdminQAGroup
      Users:
        - MYUSER
Then MYUSER can use this role through their .aws/credentials like:
[default]
aws_access_key_id = KEY
aws_secret_access_key = SECRET
[qa]
role_arn = arn:aws:iam::ACCOUNTID:role/OrganizationAccountAccessRole
source_profile = default
Once again, you might update OrganizationAccountAccessRole to your very own role.
Finally, during deployment, you can use this profile with:
serverless deploy --stage qa --aws-profile qa
I recommend setting this in package.json directly.
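A minimal sketch of that (the script name deploy:qa is an arbitrary choice):
{
  "scripts": {
    "deploy:qa": "serverless deploy --stage qa --aws-profile qa"
  }
}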
Hope this helps and clarifies how you should grant rights and access through the whole deployment process.

How to install large dependencies on AWS EFS via serverless framework

I understand that we can install dependencies in EFS from an EC2 instance and then set the mount path and PythonPath in AWS lambda so that the lambda has now the path of the dependencies folder.
But is there a way to eliminate the EC2 instance from this approach and instead install those dependencies from the Serverless Framework?
My scenario is uploading a tensorflow2 dependency (which is >500 MB) to an AWS Lambda.
Any leads would be helpful and appreciated.
Yes, you can. I am not sure if you have already set up EFS on serverless, but assuming that this has been done, you can explicitly tell your serverless lambda project which VPCs to connect to and which EFS IAM policies to use.
I do not have the details of your EFS setup, but on my project this looks something like this:
name: aws
profile: abcd
runtime: python3.8
region: us-west-1
vpc:
  securityGroupIds:
    - sg-065647b2292ad63a2
  subnetIds:
    - subnet-02ad3xxxxxxxxxxxx
    - subnet-02ca2xxxxxxxxxxxx
    - subnet-01a14xxxxxxxxxxxx
# Allow RW access to EFS services
iamManagedPolicies:
  - "arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess"
Under the functions section, just make sure that you define your env vars to point to your libs/code:
functions:
  myfunc:
    runtime: python3.8
    handler: myhandler
    environment:
      PYTHONPATH: /mnt/efs/lib/python3.8/site-packages
      LD_LIBRARY_PATH: /mnt/efs/lib/python3.8/site-packages
Finally, in resources:
resources:
  extensions:
    MyfuncLambdaFunction:
      Properties:
        FileSystemConfigs:
          - Arn: arn:aws:elasticfilesystem:us-west-1:123456789012:access-point/fsap-0012abcde1234ab12
            LocalMountPath: /mnt/efs
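Once the mount is in place, the function resolves imports from EFS at runtime through the PYTHONPATH set above. A quick sanity check inside the handler could look like this (a sketch; it assumes tensorflow was installed into the EFS site-packages):
import sys

def myhandler(event, context):
    # PYTHONPATH already puts the EFS site-packages on sys.path
    efs_paths = [p for p in sys.path if p.startswith("/mnt/efs")]
    import tensorflow as tf  # resolved from EFS, not from the deployment package
    return {"efs_paths": efs_paths, "tensorflow": tf.__version__}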
FYI, for tensorflow you can bring it down to around 60 MB using a combination of lambci/docker-lambda and careful tensorflow packaging, but in the long run you will be better off with EFS anyway; it just seemed worth mentioning.

Lambda@Edge CloudFront resource creation

I'm a little lost here. I'm trying to deploy a simple function that uses Lambda@Edge, but I'm having some problems creating the CloudFront resource and attaching it to the Lambda function.
Here is an example of the serverless.yml
service: some-service
plugins:
  - serverless-pseudo-parameters
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1
resources:
  - ${file(./resources.yml):resources}
functions:
  - ${file(./lambda-at-edge/function.yml):functions}
The function definition:
functions:
  lambda-at-edge-function:
    description: Lambda at edge authentication
    handler: serverless/index.handler
    events:
      - cloudFront:
          eventType: viewer-response
          origin: s3://some.s3.amazonaws.com/
One thing: if I don't define the CloudFront resource, it's not created, and if I do define the resource and attach it to the serverless definition, the resource is created, but then I don't know how to attach that CloudFront distribution to the function.
Edit:
I'm deploying everything with sls deploy, so my question now is: how can I reference the function in the LambdaFunctionAssociations of the CloudFront distribution?
When using Lambda@Edge you have to respect the limits.
Check them out here:
Requirements and Restrictions on Lambda Functions
This should work:
service: some-service
plugins:
  - serverless-pseudo-parameters
provider:
  name: aws
  runtime: nodejs10.x
  stage: ${env:STAGE}
  region: us-east-1
  memorySize: 128
  timeout: 5
resources:
  - ${file(./resources.yml):resources}
functions:
  - ${file(./lambda-at-edge/function.yml):functions}
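On the edit about LambdaFunctionAssociations: CloudFront requires the ARN of a published function version (it cannot point at $LATEST), so a distribution defined manually in resources.yml would reference the function roughly like this (a sketch; the version suffix :1 and the generated function name are assumptions, and #{AWS::AccountId} is resolved by serverless-pseudo-parameters):
DefaultCacheBehavior:
  # ... rest of the cache behavior ...
  LambdaFunctionAssociations:
    - EventType: viewer-response
      # Must be a published version ARN, not $LATEST
      LambdaFunctionARN: arn:aws:lambda:us-east-1:#{AWS::AccountId}:function:some-service-${env:STAGE}-lambda-at-edge-function:1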
