SSM parameters with SAM local AWS setup: unable to get parameters to work locally - aws-lambda

Hi people,
I have been trying to set up a project to test my lambdas in a local environment. We are using SAM to mimic AWS resources, and everything works fine with one exception: for business reasons, I had to include SSM parameters. When we start the lambda with sam local start-lambda ..., the code is not able to retrieve the parameters. I know the AWS credentials are fine because I can connect to AWS, but we don't want to hit real AWS services for this. We want to pass (set, define, whatever you want to call it) SSM parameters in the local environment and have sam local start-lambda use them without any AWS connection, so the parameters are used for local testing only.
I have read the following post: How to access SSM Parameter Store from SAM lambda local in node
And this issue on GitHub:
https://github.com/aws/aws-sam-cli/issues/616
Both mention that the way to do it is with --env-vars, but it is not working so far.
This is my template.
Parameters:
  IdentityNameParameter:
    Type: AWS::SSM::Parameter::Value<String>
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler::handleRequest
      CodeUri: lambdaFunction
      Runtime: java11
      Timeout: 40
      Environment:
        Variables:
          AWS_ACCESS_KEY_ID: "keyid"
          AWS_SECRET_ACCESS_KEY: "accesskey"
          AWS_DEFAULT_REGION: "us-west-1"
          AWS_REGION: "us-west-1"
This is what I use to start the lambda
sam local start-lambda --host 0.0.0.0 -d 5859 --docker-volume-basedir /folderWithClasses --container-host host.docker.internal --debug --env-vars env.json
This is the env.json:
{
  "Parameters": {
    "IdentityNameParameter": "admin"
  }
}
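For what it's worth, one distinction that may be at play here (hedged, based on the linked issues): --env-vars only overrides a function's environment variables, while values for template Parameters are passed with --parameter-overrides. Something like the following, noting that SSM-typed parameters may still fail to resolve locally:

```shell
# Pass the parameter value explicitly instead of via env.json
# (Parameters of type AWS::SSM::Parameter::Value<String> may still
# not resolve under sam local, per the linked GitHub issues):
sam local start-lambda --host 0.0.0.0 \
  --env-vars env.json \
  --parameter-overrides IdentityNameParameter=admin
```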
I guess there is no support for this. If so, what's the point of using SAM for local testing if you still need AWS to actually test?
Any clue?
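One workaround for local-only testing (a sketch, not a SAM feature) is to have the handler fall back to an environment variable when SSM is unreachable, so values injected via --env-vars are picked up locally while real SSM is used in AWS. The question's function is Java, but the pattern is the same; a minimal Python sketch (function and parameter names are illustrative):

```python
import os

def get_param(name, env_fallback=None):
    """Fetch an SSM parameter, falling back to an environment variable when
    running locally without AWS connectivity (e.g. under `sam local`)."""
    try:
        import boto3  # only usable when a real AWS endpoint is reachable
        ssm = boto3.client("ssm")
        return ssm.get_parameter(Name=name)["Parameter"]["Value"]
    except Exception:
        # Local run: read the value injected via --env-vars / env.json instead
        return os.environ.get(env_fallback or name)
```

This keeps the lambda code identical in both environments; only the source of the value changes.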

Related

how to connect a serverless client with a given AWS Lambda function

I have a question about how to connect a serverless client with a given AWS Lambda function.
I'm building a system that provides developers with a cloud-based dev environment, built on top of the AWS Lambda and DynamoDB services.
Some developers ask me how to use the Serverless Framework in this environment.
Because of the company's security policy, I can't grant admin authority to the developers, so they find it difficult to run the sls deploy command, which requires CRUD authority on the IAM service.
I've tried connecting the serverless client to the AWS Lambda provided by my system without executing the deploy command, but all attempts failed: it requires me to execute sls deploy before the deploy function command.
Is there any way to connect a serverless client with a given AWS Lambda function?
If there is a best practice for granting minimized authority, please give me a suggestion.
Thank you in advance.
First of all, you will need to set up a dedicated group with policies granting the required rights to your users. Here's a CloudFormation template, for instance (it references the default OrganizationAccountAccessRole, but you might/should create your own with minimal access):
Resources:
  # ADMIN QA
  AssumeAdministratorQARolePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: "AdminQA"
      Description: "Assume the qa administrative role"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AssumeAdministratorQARolePolicy"
            Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Resource: "arn:aws:iam::ACCOUNTID:role/OrganizationAccountAccessRole"
  AdminQAGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: AdminQA
      ManagedPolicyArns:
        - !Ref AssumeAdministratorQARolePolicy
  AdminQAUsersToGroup:
    Type: AWS::IAM::UserToGroupAddition
    Properties:
      GroupName: !Ref AdminQAGroup
      Users:
        - MYUSER
Then MYUSER can use this role through their .aws/credentials like:
[default]
aws_access_key_id = KEY
aws_secret_access_key = SECRET
[qa]
role_arn = arn:aws:iam::ACCOUNTID:role/OrganizationAccountAccessRole
source_profile = default
Once again you might update OrganizationAccountAccessRole to your very own Role.
Finally, during deployment, you can use this profile with:
serverless deploy --stage qa --aws-profile qa
which I recommend setting in package.json directly.
Hope this helps and clarifies how you should grant rights and access through the whole deployment process.
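For example, the deploy command can live in package.json as a script (the script name deploy:qa is an assumption, not from the answer):

```json
{
  "scripts": {
    "deploy:qa": "serverless deploy --stage qa --aws-profile qa"
  }
}
```

Developers then run npm run deploy:qa without having to remember the flags.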

Invalid Layer Arn Error when using ARN value from SSM parameters

The Lambda layer ARN is stored in an SSM parameter, and I need to access the value of this parameter to use as the layer ARN when defining a function and attaching a layer to it.
ERROR: SayHelloLayerARN is an Invalid Layer Arn.
Parameter name in Parameter Store: SayHelloLayerARN
Here is the SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  CREATE-WITH-SSM
  Sample SAM Template for CREATE-WITH-SSM
Parameters:
  HelloLayerARN:
    Type: AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Environment:
        Variables:
          LAYER_NAME: !Ref HelloLayerARN
      Layers:
        - !Ref HelloLayerARN
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
It seems SAM doesn't resolve SSM parameters.
Please try using the --parameter-overrides option.
Example: sam build --parameter-overrides HelloLayerARN=LambdaLayerARN
Note: You must change the HelloLayerARN type to a plain String, otherwise sam deploy fails with an SSM parameter resolving error.
Parameters:
  HelloLayerARN:
    Type: String # AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
Please refer to the known issue: https://github.com/aws/aws-sam-cli/issues/1069
The --parameter-overrides solution mentioned by #user17589914 works for build and deploy, but it does not work for local invoke (I will be very happy to be proven wrong). Below are some details on my findings and workaround.
Layers aside, there is an open issue about the inconsistency between --env-vars and --parameter-overrides across build, deploy, and local invoke, just FYI:
https://github.com/aws/aws-sam-cli/issues/1163
So in general, I am using --env-vars for local invoke, with dev parameters defined in a JSON file. For build and deploy, I use --parameter-overrides, with parameters for multiple environments defined in a samconfig.toml.
As for the layer ARN reference not working: I have not been able to get local invoke to work by passing the ARN as a parameter with either --env-vars or --parameter-overrides, so I ended up leaving the layer ARN hard-coded in my SAM template.
Looking forward to seeing whether I am missing something and someone has this working for local invoke as well.
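The build/deploy side of that split can be sketched roughly like this (the section name follows samconfig.toml conventions; the parameter value and layer ARN are purely illustrative):

```toml
# samconfig.toml (fragment) - parameters used by `sam build` / `sam deploy`
[default.deploy.parameters]
parameter_overrides = "HelloLayerARN=arn:aws:lambda:us-west-1:123456789012:layer:hello:1"
```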
Did you try another SAM CLI version?
I got the same error message with SAM CLI version 1.21.1, but not with 1.29.0. I ran a proof of concept via the SAM container image public.ecr.aws/sam/build-nodejs14.x on my local machine (macOS):
#!/bin/sh
# SAM_VERSION=1.21.1
SAM_VERSION=1.29.0
CONTAINER=public.ecr.aws/sam/build-nodejs14.x:$SAM_VERSION
EXEC_DIR=/path/to/sam
TARGET_FUNCTION=YOUR_FUNCTION_NAME

docker run \
  --rm -it $CONTAINER \
  sam --version

docker run \
  --env SAM_CLI_TELEMETRY=0 \
  --env-file $EXEC_DIR/.env \
  -v $EXEC_DIR/functions:/functions \
  --rm -it $CONTAINER \
  sh -c "cd /functions/$TARGET_FUNCTION && sam build"
The SAM CLI requires AWS credentials, so you need to provide the environment variables below, e.g. my .env file:
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=YOUR_TARGET_REGION
AWS_REGION=YOUR_TARGET_REGION
and don't forget to create an IAM policy that allows iam:ListPolicies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToPerformUserActions",
      "Effect": "Allow",
      "Action": [
        "iam:ListPolicies"
      ],
      "Resource": "*"
    }
  ]
}

How to install large dependencies on AWS EFS via serverless framework

I understand that we can install dependencies on EFS from an EC2 instance and then set the mount path and PYTHONPATH in AWS Lambda so that the lambda picks up the dependencies folder.
But is there a way to eliminate the EC2 instance from this approach and instead install those dependencies from the Serverless Framework?
My scenario is uploading a tensorflow2 dependency (which is >500 MB) to an AWS lambda.
Any leads would be helpful and appreciated.
Yes, you can. I am not sure if you have already set up EFS on serverless, but assuming that has been done, you can explicitly tell your serverless lambda project which VPC to connect to and which EFS IAM policies to use.
I do not have the details of your EFS setup, but on my project this looks something like this:
provider:
  name: aws
  profile: abcd
  runtime: python3.8
  region: us-west-1
  vpc:
    securityGroupIds:
      - sg-065647b2292ad63a2
    subnetIds:
      - subnet-02ad3xxxxxxxxxxxx
      - subnet-02ca2xxxxxxxxxxxx
      - subnet-01a14xxxxxxxxxxxx
  # Allow RW access to EFS services
  iamManagedPolicies:
    - "arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess"
Under the functions section, just make sure that you define your env vars to point to your libs/code:
functions:
  myfunc:
    runtime: python3.8
    handler: myhandler
    environment:
      PYTHONPATH: /mnt/efs/lib/python3.8/site-packages
      LD_LIBRARY_PATH: /mnt/efs/lib/python3.8/site-packages
Finally, in resources:
resources:
  extensions:
    MyfuncLambdaFunction:
      Properties:
        FileSystemConfigs:
          - Arn: arn:aws:elasticfilesystem:us-west-1:123456789012:access-point/fsap-0012abcde1234ab12
            LocalMountPath: /mnt/efs
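To check at runtime that the mount is actually visible to the function, a minimal sanity-check sketch can help (the handler name and path match the config above, but this check itself is an addition, not part of the answer):

```python
# Hypothetical sanity-check handler: reports whether the EFS mount configured
# via FileSystemConfigs/LocalMountPath is visible inside the function, which
# is handy when debugging VPC or access-point misconfiguration.
import os

def myhandler(event, context):
    # Assumes a single-entry PYTHONPATH, as configured in the functions section
    site_packages = os.environ.get(
        "PYTHONPATH", "/mnt/efs/lib/python3.8/site-packages"
    )
    return {
        "site_packages": site_packages,
        "efs_mounted": os.path.isdir(site_packages),
    }
```

Invoking it once after deployment tells you immediately whether the access point and mount path line up.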
FYI, for tensorflow you can bring the package down to around 60 MB using a combination of
lambci/docker-lambda and
tensorflow packaging,
but in the long run you will be better off with EFS anyway; it just seemed worth mentioning.

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file, the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}

custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}

plugins:
  - serverless-offline
  - serverless-stage-manager
...
TL;DR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about the default profile not existing. This is only an issue when using the SSM Parameter Store in the serverless file.
The profile attribute in your serverless.yaml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of 2 possible solutions to this:
1. Try removing profile from your serverless.yaml completely and using environment variables only.
2. Leave profile: default in your serverless.yaml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
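A sketch of option 1 for a pipeline step (the AWS_ACCESS_KEY / AWS_ACCESS_SECRET names are taken from the question's command; adjust to your CI variable names). The Serverless Framework and AWS SDK pick up the standard variables directly, so no config credentials step is needed:

```shell
# Map the pipeline secrets onto the standard AWS SDK variable names;
# `serverless deploy` then authenticates without any profile lookup.
export AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$AWS_ACCESS_SECRET"
# serverless deploy --stage staging   # no `config credentials` step needed
```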

How to access private AWS resources in AWS SAM LOCAL when start-api testing

I've been working with AWS SAM Local to create and test a lambda / API gateway stack before shipping it to production. I recently ran into a brick wall when trying to access private resources (RDS) while testing locally (sam local start-api --profile [profile]). I'm able to connect to some of these private resources if I do some SSH tunneling, but I was wondering whether I can test locally without tunneling, using the VPC config.
Below is an example sam template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example Stack
Globals:
  Function:
    Timeout: 3
Resources:
  ExampleFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.example
      Runtime: nodejs8.10
      CodeUri: .
      Description: 'Just an example'
      MemorySize: 128
      Role: 'arn:aws:iam::[arn-role]'
      VpcConfig:
        SecurityGroupIds:
          - sg-[12345]
        SubnetIds:
          - subnet-[12345]
          - subnet-[23456]
          - subnet-[34567]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /example
            Method: GET
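The SSH tunneling mentioned in the question can be sketched as follows (the bastion and database hostnames are assumptions; the idea is to forward a local port through an EC2 host in the VPC to the private RDS endpoint):

```shell
# All hostnames/ports below are placeholders for illustration only
RDS_HOST="mydb.cluster-abc123.us-east-1.rds.amazonaws.com"
BASTION="ec2-user@bastion.example.com"
# Forward local port 5432 to the private RDS endpoint through the bastion;
# the locally running lambda then connects to 127.0.0.1:5432 instead.
# (Command left commented here because it needs real, reachable hosts.)
# ssh -N -L "5432:${RDS_HOST}:5432" "$BASTION"
```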
After reading through a lot of documentation and searching Stack Overflow for anything that would help, I ended up joining the #samdev Slack channel and asking for help. I was given some guidance and a great guide on setting up OpenVPN on an EC2 instance.
The setup was super easy (completed in under 30 minutes), and the EC2 instance uses a pre-baked AMI image. Make sure you assign the new EC2 instance to the appropriate VPC containing the resources you need access to.
Here is a link to the OpenVPN guide: https://openvpn.net/index.php/access-server/on-amazon-cloud.html
You can request an invite to the #samdev slack channel here: https://awssamopensource.splashthat.com/
