Can SAM create the S3 bucket to store the Lambda function code?

Perhaps this is more than one question. I tried to sign up for the SAM Slack channel, but with no success.
I am trying out SAM to build a serverless app. I am used to having a CloudFormation template that describes all the resources needed. Now I am confused as to why SAM's CLI asks me to pass an S3 bucket to upload the Lambda function code to. I would normally expect the creation of the S3 bucket (with a random name) to be part of the CloudFormation template execution. Is SAM an extension of CloudFormation or is it not?
In my template.yaml I have something like this:
Resources:
  SrcBucket:
    Type: AWS::S3::Bucket
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 3
      Runtime: python3.7
      Handler: my.lambda_handler
      CodeUri: my/
      Events:
        ShopifyInstall:
          Type: Api
          Properties:
            Path: /
            Method: get
How do I reference the SrcBucket in CodeUri?

Well, unfortunately, no.
Deploying a SAM template happens in two parts: the package command, which builds the zip file and needs an S3 bucket to upload it to, and the deploy command, which deploys your packaged application just like CloudFormation would.
I usually have a small bash script with multiple CloudFormation stacks: one is a helper stack that creates this bucket (and also exposes its name in the outputs); the script then fetches the name and passes it along to all the other stacks.
# Create the helper stack that holds the code bucket
echo "---------Create Helper stack ---------"
aws cloudformation deploy --profile ${profile} --stack-name $helperStack --region ${region} --template-file deployment-helper.yaml

# Fetch the bucket name from the helper stack's outputs
serverlessCodeBucketName="$(aws cloudformation describe-stacks --region ${region} --profile ${profile} --stack-name $helperStack --query 'Stacks[0].Outputs[?OutputKey==`CodeBucketName`].OutputValue' --output text)"

# Package the SAM application, uploading the code to the helper bucket
aws cloudformation package --profile ${profile} --region ${region} --template-file template.yaml --output-template-file serverless-output.yaml --s3-bucket ${serverlessCodeBucketName}

# Deploy the packaged application
aws cloudformation deploy --profile ${profile} --stack-name ${applicationStack} --region ${region} --template-file serverless-output.yaml --capabilities CAPABILITY_IAM
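For reference, the helper stack can be as small as one bucket plus an output. Here is a minimal sketch of what deployment-helper.yaml could look like (the CodeBucket logical ID is an assumption; only the CodeBucketName output key is required by the describe-stacks query above), written out from the shell for convenience:

# Minimal deployment-helper.yaml (assumed layout; adjust to your needs)
cat > deployment-helper.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CodeBucket:
    Type: AWS::S3::Bucket   # bucket name is auto-generated by CloudFormation
Outputs:
  CodeBucketName:
    Description: Bucket that holds the packaged Lambda code
    Value: !Ref CodeBucket
EOF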

Related

Bucket does not exist with LocalStack and S3

I'm trying to run LocalStack via docker-compose to create an S3 bucket with Golang.
I'm using docker-compose:
and connect to S3:
and create the bucket with: aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket tags
but I keep receiving a "Bucket not exists" error!
Help, please.
Hi – please update your Docker Compose configuration to match the current LocalStack setup:
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
environment:
- DEBUG=${DEBUG-}
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
You can now create an S3 bucket using the AWS CLI:
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket tags
If you run into trouble, check whether LocalStack is running properly:
curl localhost:4566/_localstack/health
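As a further sanity check (a sketch assuming the default edge port 4566 and dummy test credentials configured for the CLI), you can confirm the bucket actually exists:

# List all buckets on the LocalStack endpoint
aws --endpoint-url=http://localhost:4566 s3api list-buckets

# Exits with code 0 if the bucket exists, an error otherwise
aws --endpoint-url=http://localhost:4566 s3api head-bucket --bucket tags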

SSM parameters with SAM local AWS setup: unable to get parameters to work locally

Hi people,
I have been trying to set up a project to test my lambdas in a local environment. We are using SAM to mimic AWS resources, and everything works pretty well with one exception: for business reasons, I had to include SSM parameters. When we try to read the parameters using sam local start-lambda ..., the Lambda code is not able to retrieve them. I know the AWS credentials are fine because I can connect to AWS, but we don't want to use real AWS services for this. We want to pass, set, define (whatever you want to call it) SSM parameters in my local environment and then use them with sam local start-lambda, without an AWS connection, so the parameters are used for local testing only.
I have read the following post, How to access SSM Parameter Store from SAM lambda local in node
And this issue in github:
https://github.com/aws/aws-sam-cli/issues/616
It is mentioned that the way to do it is by using --env-vars, but it is not working for me so far.
This is my template.
Parameters:
  IdentityNameParameter:
    Type: AWS::SSM::Parameter::Value<String>
Resources:
  MyLambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler::handleRequest
      CodeUri: lambdaFunction
      Runtime: java11
      Timeout: 40
      Environment:
        Variables:
          AWS_ACCESS_KEY_ID: "keyid"
          AWS_SECRET_ACCESS_KEY: "accesskey"
          AWS_DEFAULT_REGION: "us-west-1"
          AWS_REGION: "us-west-1"
This is what I use to start the lambda
sam local start-lambda --host 0.0.0.0 -d 5859 --docker-volume-basedir /folderWithClasses --container-host host.docker.internal --debug --env-vars env.json
This is the env.json:
{
  "Parameters": {
    "IdentityNameParameter": "admin"
  }
}
I guess there is no support for this. If so, what's the point of having SAM for local testing if you need AWS to actually test?
Any clue?

Invalid Layer Arn Error when using ARN value from SSM parameters

The Lambda layer ARN is stored in an SSM parameter, and I need to access the value of this parameter to use as the layer ARN when defining a function and attaching a layer to it.
ERROR: SayHelloLayerARN is an Invalid Layer Arn.
Parameter Name in Parameter Store: SayHelloLayerARN
Here is the SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  CREATE-WITH-SSM
  Sample SAM Template for CREATE-WITH-SSM
Parameters:
  HelloLayerARN:
    Type: AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.8
      Environment:
        Variables:
          LAYER_NAME: !Ref HelloLayerARN
      Layers:
        - !Ref HelloLayerARN
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
It seems SAM doesn't resolve SSM parameters.
Please try using the --parameter-overrides option.
Example: sam build --parameter-overrides HelloLayerARN=LambdaLayerARN
Note: You must change the HelloLayerARN type to a plain String; otherwise sam deploy fails with an SSM parameter resolution error.
Parameters:
  HelloLayerARN:
    Type: String # AWS::SSM::Parameter::Value<String>
    Default: SayHelloLayerARN
    Description: Layer ARN from SSM
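If you still want to keep the ARN in Parameter Store, one workaround (a sketch, assuming the parameter name SayHelloLayerARN and CLI credentials that can read it) is to resolve the value yourself and pass it through --parameter-overrides:

# Resolve the ARN from Parameter Store, then hand it to SAM as a plain String parameter
LAYER_ARN=$(aws ssm get-parameter --name SayHelloLayerARN --query 'Parameter.Value' --output text)
sam build
sam deploy --parameter-overrides HelloLayerARN="${LAYER_ARN}"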
Please refer to the known issue: https://github.com/aws/aws-sam-cli/issues/1069
The --parameter-overrides solution mentioned by #user17589914 works for build and deploy, but it does not work for local invoke (I will be very happy to be proven wrong). Below are some details on my findings and workaround.
The layers-specific issue aside, there is an open issue about the inconsistency between --env-vars and --parameter-overrides across build, deploy, and local invoke, just FYI:
https://github.com/aws/aws-sam-cli/issues/1163
So in general, I use --env-vars for local invoke, with dev parameters defined in a JSON file, and for build and deploy I use --parameter-overrides, with parameters for multiple environments defined in a samconfig.toml, as sketched below.
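For illustration, a sketch of that split (the function name, env.json, and the example layer ARN are placeholders):

# Local invoke: values come from a JSON file passed via --env-vars
sam local invoke HelloWorldFunction --env-vars env.json

# Build & deploy: template parameters come from --parameter-overrides / samconfig.toml
sam build
sam deploy --config-file samconfig.toml --parameter-overrides HelloLayerARN=arn:aws:lambda:us-east-1:123456789012:layer:SayHello:1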
As for the layer ARN reference not working: I have not been able to get local invoke to work by passing the ARN as a parameter with either --env-vars or --parameter-overrides, so I ended up leaving the layer ARN hard-coded in my SAM template.
Looking forward to seeing whether I am missing something and someone has this working for local invoke as well.
Did you try another SAM CLI version?
I got the same error message with SAM CLI version 1.21.1, but not with 1.29.0. I ran a proof of concept via the SAM container image public.ecr.aws/sam/build-nodejs14.x on my local machine (macOS):
#!/bin/sh
# SAM_VERSION=1.21.1
SAM_VERSION=1.29.0
CONTAINER=public.ecr.aws/sam/build-nodejs14.x:$SAM_VERSION
EXEC_DIR=/path/to/sam
TARGET_FUNCTION=YOUR_FUNCTION_NAME
docker run \
--rm -it $CONTAINER \
sam --version
docker run \
--env SAM_CLI_TELEMETRY=0 \
--env-file $EXEC_DIR/.env \
-v $EXEC_DIR/functions:/functions \
--rm -it $CONTAINER \
sh -c "cd /functions/$TARGET_FUNCTION && sam build"
SAM CLI requires AWS credentials, so you need to provide the environment variables below, e.g. in my .env file:
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=YOUR_TARGET_REGION
AWS_REGION=YOUR_TARGET_REGION
and don't forget to create an IAM policy that allows iam:ListPolicies:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUsersToPerformUserActions",
      "Effect": "Allow",
      "Action": [
        "iam:ListPolicies"
      ],
      "Resource": "*"
    }
  ]
}
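If that policy is not already attached to the user whose credentials are in the .env file, a sketch of creating and attaching it (the policy name, user name, file name, and account ID are placeholders):

# Create the managed policy from the JSON above and attach it to the CLI user
aws iam create-policy --policy-name AllowListPolicies --policy-document file://allow-list-policies.json
aws iam attach-user-policy --user-name sam-build-user --policy-arn arn:aws:iam::123456789012:policy/AllowListPolicies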

How to run AWS Lambda dotnet on localstack

The DotNet3.1 AWS Lambda
I have created an AWS Lambda solution with C# DotNet3.1 using the Amazon template
dotnet new serverless.AspNetCoreWebAPI -n MyDotNet.Lambda.Service
This creates a Lambda function whose handler is MyDotNet.Lambda.Service::MyDotNet.Lambda.Service.LambdaEntryPoint::FunctionHandlerAsync, plus a serverless.template file and an aws-lambda-tools-defaults.json.
The standard way to deploy the DotNet3.1 AWS Lambda
The standard way to deploy it would be to install Amazon.Lambda.Tools
dotnet tool update -g Amazon.Lambda.Tools
and then run
dotnet lambda deploy-serverless --profile myawsprofile
Notice that the profile is optional, but I've got AWS configured under that profile.
This will prompt for a CloudFormation stack name (e.g. foo) and an S3 bucket (e.g. my-bucket),
and will deploy it to the "real" AWS configured under the custom profile myawsprofile.
LocalStack running as a docker container
All good so far. Now I have just discovered https://github.com/localstack/localstack, which is a great way to run the AWS platform locally, so I use the docker-compose file localstack-compose.yml to spin up the container:
version: '3.8'
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack-full
    network_mode: bridge
    ports:
      - "4566:4566"
      - "4571:4571"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Like this:
docker-compose -f localstack-compose.yml up
And everything runs under port 4566.
AWS Local
In order to run AWS CLI with LocalStack I install the wrapper https://github.com/localstack/awscli-local so that I can do things like
awslocal s3 ls
How do I deploy the AWS Lambda locally?
I am too new to understand most of the tutorials I've followed. Some of them refer to the Serverless Framework, but I am just using LocalStack as a Docker container. I've also installed the SAM CLI in case it's needed (although I don't yet understand what it's for).
I've tried deploying it to the local stack with
dotnet lambda deploy-serverless --profile default
which would be the equivalent, I think, but I get
Error uploading to MyDotNet.Lambda.Service/AspNetCoreFunction-CodeUri-Or-ImageUri-637509113851513062-637509113886357582.zip in bucket foo: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint
although I have a bucket s3://foo in localstack
It's really complicated to find an example I can follow with my basic level of AWS knowledge. Are there any instructions I've missed, or a nice link/tutorial on how to achieve what I want step by step? Thanks
UPDATE 1 (11/3/2021)
I've gone step by step with a Web API project created from the Amazon template, https://gitlab.com/sunnyatticsoftware/sandbox/localstack-sandbox/-/tree/master/02-lambda-dotnet-webapi,
but I run into problems.
Steps:
First, I create a role for Lambda execution
awslocal iam create-role --role-name lambda-dotnet-webapi-ex --assume-role-policy-document file://trust-policy.json
Attach policy to the role to grant permission for execution
awslocal iam attach-role-policy --role-name lambda-dotnet-webapi-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Create Lambda function
awslocal lambda create-function --function-name lambda-dotnet-webapi-function --zip-file fileb://function.zip --handler Sample.Lambda.DotNet.WebApi::Sample.Lambda.DotNet.WebApi.LambdaEntryPoint::FunctionHandlerAsync --runtime dotnetcore3.1 --role arn:aws:iam::000000000000:role/lambda-dotnet-webapi-ex
Invoke the AWS Lambda using the base64 utility to decode the logs
awslocal lambda invoke --function-name lambda-dotnet-webapi-function out --log-type Tail --query 'LogResult' --output text | base64 -d
It returns:
iptables v1.4.21: can't initialize iptables table `nat': iptables who? (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
[Information] Microsoft.Hosting.Lifetime: Application started. Press Ctrl+C to shut down.
[Information] Microsoft.Hosting.Lifetime: Hosting environment: Production
[Information] Microsoft.Hosting.Lifetime: Content root path: /var/task
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /var/task
START RequestId: a5eb1d2d-d908-15f6-ace3-d4d0e01a0066 Version: $LATEST
Could not load file or assembly 'System.IO.Pipelines, Version=4.0.2.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'. The system cannot find the file specified.
: FileNotFoundException
at Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction`2.FunctionHandlerAsync(TREQUEST request, ILambdaContext lambdaContext)
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
at Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction`2.FunctionHandlerAsync(TREQUEST request, ILambdaContext lambdaContext)
at lambda_method(Closure , Stream , Stream , LambdaContextInternal )
END RequestId: a5eb1d2d-d908-15f6-ace3-d4d0e01a0066
REPORT RequestId: a5eb1d2d-d908-15f6-ace3-d4d0e01a0066 Init Duration: 2305.87 ms Duration: 33.29 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 0 MB
Starting daemons...
ImportError: No module named site
Does anybody have a working example?
UPDATE 2
The interesting thing is that I've tried against the REAL AWS (a different profile in AWS credentials) and I also get an error, but it's different.
Create role
aws iam create-role --role-name lambda-dotnet-webapi-ex --assume-role-policy-document file://trust-policy.json --profile diegosasw
List roles
aws iam list-roles --profile diegosasw
Attach policy
aws iam attach-role-policy --role-name lambda-dotnet-webapi-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole --profile diegosasw
Create lambda
aws lambda create-function --function-name lambda-dotnet-webapi-function --zip-file fileb://function.zip --handler Sample.Lambda.DotNet.WebApi::Sample.Lambda.DotNet.WebApi.LambdaEntryPoint::FunctionHandlerAsync --runtime dotnetcore3.1 --role arn:aws:iam::308309238958:role/lambda-dotnet-webapi-ex --profile diegosasw
Invoke
aws lambda invoke --function-name lambda-dotnet-webapi-function --profile diegosasw out --log-type Tail --query 'LogResult' --output text | base64 -d
It returns
START RequestId: 7d77489f-869b-4e4d-87a0-ac800d71eb2d Version: $LATEST
warn: Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction[0]
Request does not contain domain name information but is derived from APIGatewayProxyFunction.
[Warning] Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction: Request does not contain domain name information but is derived from APIGatewayProxyFunction.
Object reference not set to an instance of an object.: NullReferenceException
at Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction.MarshallRequest(InvokeFeatures features, APIGatewayProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)
at Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction`2.FunctionHandlerAsync(TREQUEST request, ILambdaContext lambdaContext)
at lambda_method(Closure , Stream , Stream , LambdaContextInternal )
END RequestId: 7d77489f-869b-4e4d-87a0-ac800d71eb2d
REPORT RequestId: 7d77489f-869b-4e4d-87a0-ac800d71eb2d Duration: 755.06 ms Billed Duration: 756 ms Memory Size: 128 MB Max Memory Used: 87 MB Init Duration: 462.09 ms
I got it working both for AWS and LocalStack (i.e., awslocal). Here are the steps using just the AWS CLI. Here's the sample repo: https://gitlab.com/sunnyatticsoftware/sandbox/localstack-sandbox/-/tree/master/03-lambda-dotnet-empty
Create AWS lambda in localstack with AWS CLI
AWS
Create empty sample C# lambda function from an Amazon template
dotnet new lambda.EmptyFunction -n Sample.Lambda.DotNet
Compile and publish
dotnet build
dotnet publish -c Release -o publish
Zip lambda files
cd publish
zip -r ../function.zip *
Create role
aws --profile diegosasw iam create-role --role-name lambda-dotnet-ex --assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
Attach AWSLambdaBasicExecutionRole policy to role
aws --profile diegosasw iam attach-role-policy --role-name lambda-dotnet-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Create lambda
aws --profile diegosasw lambda create-function --function-name lambda-dotnet-function --zip-file fileb://function.zip --handler Sample.Lambda.DotNet::Sample.Lambda.DotNet.Function::FunctionHandler --runtime dotnetcore3.1 --role arn:aws:iam::308309238958:role/lambda-dotnet-ex
Invoke lambda
aws --profile diegosasw lambda invoke --function-name lambda-dotnet-function --payload "\"Just Checking If Everything is OK\"" response.json --log-type Tail
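To inspect the result, you can print the response file, or re-run the invoke and decode the Base64 tail log, the same trick used earlier in this thread (response.json is the output file named in the command above):

# Print the function's return value
cat response.json

# Invoke again and decode the Base64-encoded tail log
aws --profile diegosasw lambda invoke --function-name lambda-dotnet-function --payload "\"Just Checking If Everything is OK\"" response.json --log-type Tail --query 'LogResult' --output text | base64 -d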
LocalStack
For LocalStack it's similar, just replacing aws with awslocal. I don't specify any profile, but you can use --profile default or whichever profile you have in your .aws/credentials.
Create role
awslocal iam create-role --role-name lambda-dotnet-ex --assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
Attach AWSLambdaBasicExecutionRole policy to role
awslocal iam attach-role-policy --role-name lambda-dotnet-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Create lambda
awslocal lambda create-function --function-name lambda-dotnet-function --zip-file fileb://function.zip --handler Sample.Lambda.DotNet::Sample.Lambda.DotNet.Function::FunctionHandler --runtime dotnetcore3.1 --role arn:aws:iam::000000000000:role/lambda-dotnet-ex
Invoke the Lambda in LocalStack, passing a JSON payload (a plain string is valid JSON)
awslocal lambda invoke --function-name lambda-dotnet-function --payload "\"Just Checking If Everything is OK again\"" response.json --log-type Tail
View functions
awslocal lambda list-functions
Delete function
awslocal lambda delete-function --function-name lambda-dotnet-function
Dotnet tool
With dotnet tool, the equivalent is
dotnet lambda invoke-function lambda-dotnet-function --payload "Just Checking If Everything is OK" --profile diegosasw

Using If statement with shell commands in jenkins Declarative pipeline cloudformation

I am integrating AWS CloudFormation into my Jenkins pipeline. I want to execute a
$ aws cloudformation describe-stacks --stack-name dev-nics-proxyservlet-svc --region us-west-2
command to see if I have a stack out there with the name I am looking for. If the command finds that the stack exists, I want to delete the stack:
$ aws cloudformation delete-stack --stack-name dev-nics-proxyservlet-svc
But if the stack doesn't exist, I want to create the stack:
aws cloudformation create-stack --stack-name dev-nics-proxyservlet-svc --region us-west-2 --template-body file://dev-nics-proxyservlet-cluster.yml --parameters file://dev-nics-proxyservlet-svc-param.json --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM"
How can I write this shell logic in a declarative multibranch Jenkins pipeline? Any help is appreciated.
Thanks!
I think something along these lines should work:
if aws cloudformation describe-stacks --stack-name dev-nics-proxyservlet-svc --region us-west-2 &>/dev/null
then
aws cloudformation delete-stack --stack-name dev-nics-proxyservlet-svc
else
aws cloudformation create-stack --stack-name dev-nics-proxyservlet-svc --region us-west-2 --template-body file://dev-nics-proxyservlet-cluster.yml --parameters file://dev-nics-proxyservlet-svc-param.json --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM"
fi
The if works by checking the exit code of aws cloudformation describe-stacks: if it's 0, the stack exists; if it's non-zero, it does not.
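In a declarative pipeline this logic can simply live inside an sh step; a minimal sketch (the agent, stage name, and the portable redirect are assumptions, since sh steps run under /bin/sh by default):

pipeline {
    agent any
    stages {
        stage('Create or delete stack') {
            steps {
                sh '''
                    # describe-stacks exits 0 if the stack exists, non-zero otherwise
                    if aws cloudformation describe-stacks --stack-name dev-nics-proxyservlet-svc --region us-west-2 > /dev/null 2>&1
                    then
                        aws cloudformation delete-stack --stack-name dev-nics-proxyservlet-svc
                    else
                        aws cloudformation create-stack --stack-name dev-nics-proxyservlet-svc --region us-west-2 --template-body file://dev-nics-proxyservlet-cluster.yml --parameters file://dev-nics-proxyservlet-svc-param.json --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM"
                    fi
                '''
            }
        }
    }
}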
