Invalid Layer Arn Error when using ARN value from SSM parameters - aws-lambda

A Lambda layer ARN is stored in an SSM parameter, and I need to read that parameter's value and use it as the layer ARN when defining a function and attaching the layer to it.
ERROR: SayHelloLayerARN is an Invalid Layer Arn.
Parameter Name in Parameter Store: SayHelloLayerARN
Here is the SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  CREATE-WITH-SSM
  Sample SAM Template for CREATE-WITH-SSM
Parameters:
  HelloLayerARN:
    Type: AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Environment:
        Variables:
          LAYER_NAME: !Ref HelloLayerARN
      Layers:
        - !Ref HelloLayerARN
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
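For context, the SSM parameter referenced above would have been created with something along these lines (a sketch; the region, account ID and layer ARN value are placeholders):

aws ssm put-parameter \
  --name "SayHelloLayerARN" \
  --type String \
  --value "arn:aws:lambda:us-east-1:123456789012:layer:SayHelloLayer:1"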

It seems SAM doesn't resolve SSM parameters.
Please try using the --parameter-overrides option.
Example: sam build --parameter-overrides HelloLayerARN=LambdaLayerARN
Note: you must change the HelloLayerARN Type to a plain String, otherwise sam deploy fails with an SSM parameter resolution error.
Parameters:
  HelloLayerARN:
    Type: String #AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
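With the parameter type changed to String, one way to feed in the real ARN is to read it from SSM yourself and pass it through --parameter-overrides (a sketch; profile/region flags omitted):

# Resolve the ARN from Parameter Store, then pass it to SAM explicitly
LAYER_ARN=$(aws ssm get-parameter --name SayHelloLayerARN --query 'Parameter.Value' --output text)
sam build --parameter-overrides HelloLayerARN=$LAYER_ARN
sam deploy --parameter-overrides HelloLayerARN=$LAYER_ARN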
Please refer to the known issue: https://github.com/aws/aws-sam-cli/issues/1069

The --parameter-overrides solution mentioned by #user17589914 works for build and deploy but it does not work for local invoke (I will be very happy to be proven wrong). Below are some details on my findings and workaround:
The layers-specific issue aside, there is an open issue about the inconsistency between --env-vars and --parameter-overrides across build, deploy and local invoke, just FYI:
https://github.com/aws/aws-sam-cli/issues/1163
So in general, I use --env-vars for local invoke, with dev parameters defined in a JSON file, and for build and deploy I use --parameter-overrides, with parameters for multiple environments defined in samconfig.toml.
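For reference, the local invoke side of that setup looks roughly like this (a sketch; the function and file names are illustrative):

# Dev parameters live in a JSON file and are injected as environment variables
# env.dev.json maps function logical IDs to variables, e.g.
# { "HelloWorldFunction": { "LAYER_NAME": "dev-layer-arn" } }
sam local invoke HelloWorldFunction --env-vars env.dev.json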
For the Layer ARN reference not working issue, I have not been able to get local invoke to work by passing the ARN as a parameter with either --env-vars or --parameter-overrides. So, I ended by leaving the layer ARN hard-coded in my sam template.
Looking forward to see if I am missing something and someone has this working for local invoke as well.

Have you tried another SAM CLI version?
I got the same error message with SAM CLI version 1.21.1 but not with 1.29.0. I did a quick POC via the SAM container image public.ecr.aws/sam/build-nodejs14.x on my local machine (macOS):
#!/bin/sh
# SAM_VERSION=1.21.1
SAM_VERSION=1.29.0
CONTAINER=public.ecr.aws/sam/build-nodejs14.x:$SAM_VERSION
EXEC_DIR=/path/to/sam
TARGET_FUNCTION=YOUR_FUNCTION_NAME

docker run \
  --rm -it $CONTAINER \
  sam --version

docker run \
  --env SAM_CLI_TELEMETRY=0 \
  --env-file $EXEC_DIR/.env \
  -v $EXEC_DIR/functions:/functions \
  --rm -it $CONTAINER \
  sh -c "cd /functions/$TARGET_FUNCTION && sam build"
SAM CLI requires AWS credentials, so you need to provide the environment variables below, e.g. my .env file:
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=YOUR_TARGET_REGION
AWS_REGION=YOUR_TARGET_REGION
and don't forget to create an IAM policy that allows iam:ListPolicies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToPerformUserActions",
      "Effect": "Allow",
      "Action": [
        "iam:ListPolicies"
      ],
      "Resource": "*"
    }
  ]
}
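A sketch of creating and attaching such a policy with the AWS CLI (the policy name, user name and account ID are placeholders):

# Save the JSON above as list-policies.json, then:
aws iam create-policy --policy-name sam-build-list-policies --policy-document file://list-policies.json
aws iam attach-user-policy --user-name sam-build-user \
  --policy-arn arn:aws:iam::123456789012:policy/sam-build-list-policies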
Result: the build succeeds with SAM CLI 1.29.0 but fails with the Invalid Layer Arn error on 1.21.1 (screenshot omitted).

Related

Reducing over 30 seconds cold start on AWS API Gateway + Lambda

I've been facing an extremely slow cold start on Lambda Functions deployed in Docker containers together with an API Gateway.
Tech Stack:
FastAPI
Mangum (https://mangum.io/)
API Gateway
AWS Lambda
To do the deployment, I've been using AWS SAM with the following template file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  demo
Resources:
  AppFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 118
      MemorySize: 3008
      CodeUri: app/
      PackageType: Image
      Events:
        ApiEvent:
          Properties:
            RestApiId:
              Ref: FastapiExampleGateway
            Path: /{proxy+}
            Method: ANY
            Auth:
              ApiKeyRequired: true
          Type: Api
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: .
  FastapiExampleGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      OpenApiVersion: '3.0.0'
      # Timeout: 30
      Auth:
        ApiKeyRequired: true
        UsagePlan:
          CreateUsagePlan: PER_API
          UsagePlanName: GatewayAuthorization
Outputs:
  Api:
    Description: "API Gateway endpoint URL for Prod stage for App function"
    Value: !Sub "https://${FastapiExampleGateway}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
The lambda is relatively light, with the following requirements installed:
jsonschema==4.16.0
numpy==1.23.3
pandas==1.5.0
pandas-gbq==0.17.8
fastapi==0.87.0
uvicorn==0.19.0
PyYAML==6.0
SQLAlchemy==1.4.41
pymongo==4.3.2
google-api-core==2.10.1
google-auth==2.11.0
google-auth-oauthlib==0.5.3
google-cloud-bigquery==3.3.2
google-cloud-bigquery-storage==2.16.0
google-cloud-core==2.3.2
google-crc32c==1.5.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.4
mangum==0.11.0
And the Dockerfile I'm using for the deployment is:
FROM public.ecr.aws/lambda/python:3.9
WORKDIR /code
RUN pip install pip --upgrade
COPY ./api/requirements.txt /code/api/requirements.txt
RUN pip install --no-cache-dir -r /code/api/requirements.txt
COPY ./api /code/api
EXPOSE 7777
CMD ["api.main.handler"]
ENV PYTHONPATH "${PYTHONPATH}:/code/"
This leads to a roughly 250 MB image.
On the first Lambda pull (cold start), I'm seeing a very long startup before the actual Lambda execution [timing screenshot omitted]. It reaches the point where API Gateway times out due to its maximum ~30-second response limit!
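To confirm where the time goes, it may help to look at the Init Duration reported in the function's REPORT log lines, e.g. (a sketch; the stack name is assumed from the template above):

# Pull recent logs for the function and keep only the REPORT lines
sam logs -n AppFunction --stack-name demo --filter "REPORT"
# A cold start shows an extra "Init Duration: ..." entry in the REPORT line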
Local tests using sam local start-api work fine.
I've tried increasing the lambda function RAM to higher values.
Not sure if this is a problem with Mangum (the ASGI adapter for FastAPI)?

SSM parameters SAM local aws set up: unable to set properties to work in local

Hi people.
I have been trying to set up a project to test my lambdas in a local environment. We are using SAM to mimic AWS resources, and everything works pretty well with one exception: for business reasons, I had to include SSM parameters. When we try to read the parameters using sam local start-lambda ... whatever, the Lambda code is not able to retrieve them. I know the AWS credentials are fine, as I can connect to AWS, but we don't want to use real AWS services for this: we want to pass (set, define, whatever you want to call it) SSM parameters in my local environment and then use them with sam local start-lambda, without an AWS connection, so we can use the parameters for local testing only.
I have read the following post: How to access SSM Parameter Store from SAM lambda local in node
And this issue on GitHub:
https://github.com/aws/aws-sam-cli/issues/616
It is mentioned that the way to do it is by using --env-vars, but it is not working so far.
This is my template.
Parameters:
  IdentityNameParameter:
    Type: AWS::SSM::Parameter::Value<String>
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler::handleRequest
      CodeUri: lambdaFunction
      Runtime: java11
      Timeout: 40
      Environment:
        Variables:
          AWS_ACCESS_KEY_ID: "keyid"
          AWS_SECRET_ACCESS_KEY: "accesskey"
          AWS_DEFAULT_REGION: "us-west-1"
          AWS_REGION: "us-west-1"
This is what I use to start the lambda
sam local start-lambda --host 0.0.0.0 -d 5859 --docker-volume-basedir /folderWithClasses --container-host host.docker.internal --debug --env-vars env.json
This is the env.json:
{
  "Parameters": {
    "IdentityNameParameter": "admin"
  }
}
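For reference, the equivalent attempt with --parameter-overrides instead of --env-vars would look roughly like this (a sketch only; as in the workaround further up, it assumes the parameter Type is temporarily switched to plain String for local testing):

sam local start-lambda --host 0.0.0.0 -d 5859 \
  --container-host host.docker.internal \
  --parameter-overrides IdentityNameParameter=admin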
I guess there is no support for this. If so, what's the point of having SAM for local testing if you need AWS to actually test?
Any clue?

How to debug and run multiple lambdas locally

I would like to build a .NET HTTP API using AWS Lambdas. These Lambdas will be called by a UI and some other systems via API Gateway. Obviously, in the local environment I would like to run/debug these.
What I have tried:
a) Using the mock tool that comes with the AWS Visual Studio templates. You can call individual Lambdas, but I couldn't figure out how to call them from e.g. Postman using normal REST calls. I don't know how the mock tool makes those calls, as Chrome/Firefox doesn't show them.
b) Using sam local start-api. Here is what I did:
sam --version
SAM CLI, version 1.22.0
sam init (choose aws quick start template, package type Image and amazon/dotnet5.0-base as base image)
I can build the solution with sam build, run it with sam local start-api, browse to http://localhost:3000/hello, and it works. The problem is that I would need to do the build in VS plus those steps every time I change code. There is also no easy way to attach a debugger.
So what is the recommended way to do this? I know you can run a whole .NET web API inside a Lambda, but that doesn't sound like a good technical solution. I am assuming I am not the first person building an HTTP API using Lambdas.
It might be worth considering running a lambda-like environment in Docker.
While including the dotnet tools you need might not be feasible in an actual Lambda, it might be feasible to either include them in a Docker image or bind-mount them into a Docker container. These images from lambci can help with that: https://hub.docker.com/r/lambci/lambda/
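A rough sketch of how those lambci images are usually run (the mount path, image tag and handler string are placeholders for your published .NET output):

# Mount the published function code into /var/task and pass the handler string
docker run --rm \
  -v "$PWD/publish":/var/task \
  lambci/lambda:dotnetcore3.1 \
  "MyAssembly::MyNamespace.MyFunction::FunctionHandler"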
You can use sam local
https://github.com/thoeni/aws-sam-local
Example: create an API with API Gateway:
Resources:
  ApiGatewayToLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: ['sts:AssumeRole']
            Effect: Allow
            Principal:
              Service: ['apigateway.amazonaws.com']
        Version: '2012-10-17'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaRole
        - arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs

  ApiGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: test
      EndpointConfiguration: REGIONAL
      DefinitionBody:
        swagger: "2.0"
        info:
          title: "TestAPI"
          description: TestAPI description in Markdown.
        paths:
          /create:
            post:
              x-amazon-apigateway-integration:
                uri:
                  !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MyLambda.Arn}/invocations
                credentials: !GetAtt ApiGatewayToLambdaRole.Arn
                responses: {}
                httpMethod: POST
                type: aws
        x-amazon-apigateway-request-validators:
          Validate query string parameters and headers:
            validateRequestParameters: true
            validateRequestBody: false

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: ['sts:AssumeRole']
            Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
        Version: '2012-10-17'
      Path: /
      Policies:
        - PolicyName: CodeBuildAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                  - logs:*
                  - lambda:*
                  - ec2:CreateNetworkInterface
                  - ec2:DescribeNetworkInterfaces
                  - ec2:DeleteNetworkInterface
                Effect: Allow
                Resource: "*"

  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      Role: !GetAtt LambdaRole.Arn
      Handler: myfunctionname.lambda_handler
      CodeUri: ./src/myfunctionname
      Events:
        SCAPIGateway:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /create
            Method: POST
...
Build:
time sam build --use-container --template backend/template.yaml
Invoke Lambda locally:
The command to invoke a Lambda locally is sam local invoke, and the -e flag is used to specify the path to the Lambda event.
$ sam local invoke -e event.json
When it is run, it will look something like this:
$ sam local invoke MyLambda -e event.json
2021-04-20 11:11:09 Invoking index.handler
2021-04-20 11:11:09 Found credentials in shared credentials file: ~/.aws/credentials
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-invoke.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-start-api.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
https://github.com/ashiina/lambda-local
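Since the question also asks about attaching a debugger, sam local can expose a debug port that an IDE can attach to (a sketch; the port number is arbitrary):

# Expose a debugger port for the local API
sam local start-api --debug-port 5858
# Or for a single invocation
sam local invoke MyLambda -e event.json --debug-port 5858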

Can SAM create the s3 bucket to store the lambda function code?

Perhaps this is more than one question. I tried to sign up for the SAM Slack channel, but with no success.
I am trying out SAM to build a serverless app. I am used to having a CloudFormation template that describes all the resources needed, so I am confused as to why the SAM CLI asks me to pass an S3 bucket to upload the Lambda function code to. I would normally expect the creation of an S3 bucket (with a random name) to be part of the CloudFormation template execution. Is SAM an extension of CloudFormation or is it not?
In my template.yaml I have something like this:
Resources:
  SrcBucket:
    Type: AWS::S3::Bucket
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 3
      Runtime: python3.7
      Handler: my.lambda_handler
      CodeUri: my/
      Events:
        ShopifyInstall:
          Type: Api
          Properties:
            Path: /
            Method: get
How do I reference the SrcBucket in CodeUri?
Well, unfortunately, no.
The deployment of a SAM template happens in two parts: the package command, which constructs the zip file and needs an S3 bucket to upload it to,
and the deploy command, which simply deploys your packaged application, just like CloudFormation would.
I usually have a small bash script that works with multiple CloudFormation stacks: one is a helper stack which creates this bucket (and also exposes its name in the outputs); the script then fetches the name and passes it along to all the other stacks.
#Create the Helper stack
echo "---------Create Helper stack ---------"
aws cloudformation deploy --profile ${profile} --stack-name $helperStack \
  --region ${region} --template-file deployment-helper.yaml

serverlessCodeBucketName="$(aws cloudformation describe-stacks --region ${region} --profile ${profile} \
  --stack-name $helperStack \
  --query 'Stacks[0].Outputs[?OutputKey==`CodeBucketName`].OutputValue' --output text)"

aws cloudformation package --profile ${profile} --region ${region} \
  --template-file template.yaml \
  --output-template-file serverless-output.yaml \
  --s3-bucket ${serverlessCodeBucketName}

aws cloudformation deploy --profile ${profile} --stack-name ${applicationStack} \
  --region ${region} --template-file serverless-output.yaml \
  --capabilities CAPABILITY_IAM

serverless + aws lambda fails at 'Uploading CloudFormation file to S3'

I have a lambda deployed via serverless deploy and it fails at
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless Error ---------------------------------------
Access Denied
My company has very tight restrictions around S3. How do I know which S3 bucket is getting access denied so I can request access? The serverless.yml looks like this:
service: some-lambda-name
provider:
  name: aws
  runtime: python3.6
  stage: 'staging'
  region: us-east-1
  role: arn:aws:iam::12345:role/some-lambda
  memorySize: 512
  deploymentBucket:
    name: lambda-bucket-staging
functions:
  some-lambda-name:
    name: some-lambda-name
    handler: some-lambda-name.lambda_handler
    memorySize: 128
Edit:
In Terraform, my deployment role has total access to the bucket I expect it to deploy to:
{
  "Action": "s3:*",
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::lambda-bucket-staging",
    "arn:aws:s3:::lambda-bucket-staging/*"
  ]
}
Make sure you have the action/resource pairs set up correctly, and that your AWS resource has public access, etc.
For example, here are permissions allowing access to a fake human-resources account bucket; notice that the resources are not all the same! (JSON action-resource example screenshot omitted.)
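One quick sanity check (assuming the AWS CLI uses the same credentials as serverless deploy) is to try listing the configured deployment bucket directly; if this is also denied, the problem is the bucket named under deploymentBucket:

aws s3 ls s3://lambda-bucket-staging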
