Lambda Provisioned Concurrency in CloudFormation - aws-lambda

Note: Please read my question before flagging it, as it is different from many other Provisioned Concurrency questions I've seen on SO.
I need to configure provisioned concurrency in one of my existing applications that uses CloudFormation templates with Lambda functions (AWS::Lambda::Function resource, NOT SAM with AWS::Serverless::Function resource).
I did some tests, but here's where I am stuck right now:
- Provisioned concurrency can only be configured for an Alias or a Version, however...
- It can't be configured for an Alias that points to the live ($LATEST) function; it must point to a Version.
- It can't be configured for the Version that is $LATEST.
So what's the "right" way to set up provisioned concurrency?
When deploying the CloudFormation template, I can create a Version resource which can have provisioned concurrency configured (shown below). The API Gateway endpoint can point directly to this specific Version instead of the $LATEST version.
However, there is no way to update a Version resource. Once it's created, it can only be deleted.
So each time I update my Lambda function code, I would have to manually remove the current Version resource from the CloudFormation template and add a new one so that a new Version is created. This defeats the purpose of having a template to deploy.
What are my other options? How can I have a Lambda function ($LATEST, Version, or Alias) such that:
- provisioned concurrency is configured, and
- I can make changes to the Lambda code without having to modify the CloudFormation template each time?
######## LambdaTest Function ########
LambdaTest:
  Type: "AWS::Lambda::Function"
  DependsOn:
    - LambdaRole
    - LambdaPolicy
  Properties:
    FunctionName: "LambdaTest"
    Role: !GetAtt LambdaRole.Arn
    Code:
      S3Bucket: !Ref JarFilesBucketName
      S3Key: LambdaTest.jar
    Handler: com.example.RnD.LambdaTest::handleRequest
    Runtime: "java11"
    Timeout: 30
    MemorySize: 512

######## LambdaTest Function Version ########
LambdaTestVersion:
  Type: "AWS::Lambda::Version"
  Properties:
    FunctionName: !GetAtt LambdaTest.Arn
    Description: "v1"
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 5
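A possible companion resource, sketched under the assumption that the logical IDs above are kept: an AWS::Lambda::Alias in front of the published Version gives API Gateway a stable ARN to target, though the Version-replacement problem described above remains.

```yaml
######## LambdaTest Live Alias (sketch) ########
LambdaTestAliasLive:
  Type: "AWS::Lambda::Alias"
  Properties:
    FunctionName: !Ref LambdaTest
    # GetAtt on an AWS::Lambda::Version returns the published version number.
    FunctionVersion: !GetAtt LambdaTestVersion.Version
    Name: "live"
    # The ProvisionedConcurrencyConfig could equally be placed here on the
    # alias instead of on the Version resource.
```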

You are correct that we cannot use $LATEST (or an alias pointing at it) for provisioned concurrency, per AWS. But I think you are missing a key piece of information: SAM generates exactly the version and alias resources you are after as part of its process. If you want to create the Lambda resources directly/manually, you can, but then you will have to wire up the equivalent of AutoPublishAlias: live yourself.
My solution/workaround is to set AutoPublishAlias: live in the function's properties. You can just add that per the reference docs, or follow the console steps below and compare what the SAM template generates against what your plain Lambda template has.
Optional steps, in case they help:
- Select Add configuration; provisioned concurrency can be enabled for a specific Lambda function version or alias (but you can't use $LATEST).
- Since you can have different settings for each version of a function, using an alias makes it easier to attach these settings to the correct version of your function.
- Select the alias live. Note that you will have to keep it pointed at the latest version, which is what the AWS SAM AutoPublishAlias function property does for you.
- Then go to Provisioned Concurrency, enter a value (e.g. 500), and Save.
Now the Provisioned Concurrency configuration is in progress, and once it completes, all execution environments are ready to handle the inbound concurrent requests.
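If switching the function to SAM's AWS::Serverless::Function is an option, the whole setup collapses into two properties. A minimal sketch (the function name, handler, runtime, and CodeUri are placeholders):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs18.x
    CodeUri: ./src
    # SAM publishes a new Version on every code change and repoints
    # the "live" alias to it.
    AutoPublishAlias: live
    # Applied to the auto-published alias, not to $LATEST.
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 5
```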

Related

serverless - dynamo streams - how to setup destinationConfig?

I have the following lambda configuration:
MyFunc:
  handler: my_handler
  timeout: 60
  role: myrole
  events:
    - stream:
        type: dynamodb
        arn: <dynamo_db_stream_arn>
        startingPosition: LATEST
        maximumRetryAttempts: 3
        destinations:
          onFailure: <sqs_queue_arn>
        enabled: True
Yet, when deploying, I don't see that the onFailure is even rendered in the CloudFormation template.
I've set it up as described in this documentation:
https://serverless.com/framework/docs/providers/aws/events/streams/
Any idea what I'm missing?
==========================
So, completing Snickers3192's answer: I'm actually not sure what's wrong with the configuration above, as serverless should support it, but eventually what I did was create the stream handler in a separate resource, so basically my serverless.yml looks like this:
functions:
  MyFunc:
    handler: my_handler
    timeout: 60
    role: myrole

resources:
  Resources:
    MySourceMapping:
      Type: AWS::Lambda::EventSourceMapping
      DependsOn: MyFuncLambdaFunction
      Properties:
        EventSourceArn: <dynamo_db_stream_arn>
        FunctionName: MyFunc
        MaximumRetryAttempts: 3
        StartingPosition: LATEST
        DestinationConfig:
          OnFailure:
            Destination: <sqs_queue_arn>
I think you're just missing "arn:".
Here's what worked for us.
maximumRetryAttempts: 10
maximumRecordAgeInSeconds: 300
bisectBatchOnFunctionError: true
destinations:
  onFailure:
    arn:
      Fn::GetAtt:
        - fileReducerDeadLetterQueue
        - Arn
    type: sqs
Even though I like the Serverless Framework, I don't recommend using it for anything other than developing Lambda functions; I wouldn't even use the http event for creating an API Gateway. The Unix philosophy is to do one thing well, and I feel serverless should stick to that, not try to become another Terraform or something, because it isn't one.
So create your Lambda functions in serverless and that's it. Do the other stuff elsewhere. If the resource can be managed effectively in CloudFormation (AWS::Lambda::EventSourceMapping), then you can use that. If it makes sense to put it at the bottom of the serverless.yml in resources:, you can do that, but if not, let it have its own template.
There are quite a number of permissions needed for setting up your Lambda for DynamoDB streams, and I wouldn't trust serverless to do that for you. A properly locked-down AWS production setup might not let an external tool create IAM roles anyway.
As soon as you differ the slightest bit from the serverless default CloudFormation template, you'll have problems. You are probably spending hours right now on a tool which was supposed to save you time, defeating its purpose. I suggest making more stacks rather than fewer, and using naming conventions when one stack needs a Lambda from another. This is actually more manageable: when one thing fails you can still update the other stacks, and swap stacks as things change, which you can't do if you stick it all in one serverless.yml.

Serverless.com / CloudFormation: Properties "Retry attempts", "Maximum age of record" not set on an AWS Lambda EventSourceMapping to a DynamoDB Stream

I'm trying to set the properties "Retry attempts" and "Maximum age of record" on an AWS Lambda EventSourceMapping to a DynamoDB Stream - via serverless.yml for the serverless framework.
When the stack is deployed, they keep the default values, and not the values I set. Help? Thanks
My code:
name-of-serverless-function:
  handler: src/functions/my.handler
  events:
    - stream:
        type: dynamodb
        batchSize: 1
        maximumRetryAttempts: 2
        maximumRecordAgeInSeconds: 8
        arn: properWorkingARN
What is your serverless version?
I suspect you are using a serverless version that does not support the stream event syntax you are using. For example, maximumRetryAttempts is only supported from version 1.60.0.
serverless usually just ignores unsupported syntax without returning any error.
Check whether your serverless version supports what you want, or just upgrade to the latest version and try again.
In addition, you can check the CloudFormation file serverless creates to deploy your project, in .serverless/cloudformation-template-update-stack.json. Check whether the CloudFormation is generated there as you expected.
---Edit---
I found that MaximumRecordAgeInSeconds does not seem to be supported in serverless at the moment; there is an open issue for it.
I just sent a PR implementing the MaximumRecordAgeInSeconds property for Kinesis and DynamoDB streams: https://github.com/serverless/serverless/pull/7833

Passing secrets to lambda during deployment stage (CodePipeline) with Serverless?

I have a CodePipeline set up with GitHub as the source. I am trying, so far without success, to pass a single secret parameter (in this case a Stripe secret key, currently defined in an .env file; explanation down below) to a specific Lambda during the Deployment stage of the CodePipeline's execution.
The Deployment stage in my case is basically a CodeBuild project that runs the deployment.sh script:
#!/bin/bash
npm install -g serverless@1.60.4
serverless deploy --stage $env -v -r eu-central-1
Explanation:
I've tried doing this with serverless-dotenv-plugin, which serves the purpose when the deployment is done locally, but when it's done through CodePipeline the Lambda fails at execution, for this reason:
Since CodePipeline's source is set to GitHub (the .env file is not committed), whenever a change is committed to the git repository, CodePipeline's execution is triggered. By the time it reaches the deployment stage, all node modules are installed (serverless-dotenv-plugin along with them), and when the serverless deploy --stage $env -v -r eu-central-1 command executes, serverless-dotenv-plugin will search for the .env file in which my secret is stored. It won't find it, since there's no .env file once we are out of the "local" scope, and when the Lambda requiring this secret triggers, it will throw an error.
So my question is: is it possible to do this with dotenv/serverless-dotenv-plugin, or should that approach be discarded? Should I maybe use SSM Parameter Store or Secrets Manager? If yes, could someone explain how? :)
So, upon further investigation of this topic, I think I have the solution.
SSM Parameter Store vs. Secrets Manager is an entirely different topic, but for this problem SSM Parameter Store is the choice I went with. Basically, it can be done in two ways.
1. Use AWS Parameter Store
Simply add a secret in your AWS Parameter Store console, then reference the value in your serverless.yml as a Lambda environment variable. The Serverless Framework is able to fetch the value from your AWS Parameter Store account on deploy.
provider:
  environment:
    stripeSecretKey: ${ssm:stripeSecretKey}
Finally, you can reference it in your code just as before:
const stripe = Stripe(process.env.stripeSecretKey);
PROS: This can be used along with a local .env file for both local and remote usage while keeping your Lambda code the same, i.e. process.env.stripeSecretKey.
CONS: Since the secrets are decrypted and then set as Lambda environment variables on deploy, if you go to your Lambda console you'll be able to see the secret values in plain text (which indicates some security issues).
That brings me to the second way of doing this, which I find more secure and which I ultimately choose:
2. Store in AWS Parameter Store, and decrypt at runtime
To avoid exposing the secrets in plain text in your AWS Lambda Console, you can decrypt them at runtime instead. Here is how:
Add the secrets in your AWS Parameter Store Console just as in the above step.
Change your Lambda code to call the Parameter Store directly and decrypt the value at runtime:
import stripePackage from 'stripe';
const aws = require('aws-sdk');
const ssm = new aws.SSM();

export const handler = async (event) => {
  // getParameter(...).promise() returns a promise, so the result
  // must be awaited before the value can be read.
  const stripeSecretKey = await ssm.getParameter(
    { Name: 'stripeSecretKey', WithDecryption: true }
  ).promise();
  const stripe = stripePackage(stripeSecretKey.Parameter.Value);
  // ... use stripe here
};
(Small tip: make sure your Lambda handler is defined as an async function, so that you can use the await keyword before ssm.getParameter(...).promise().)
PROS: Your secrets are not exposed in plain text at any point.
CONS: Your Lambda code does get a bit more complicated, and there is an added latency since it needs to fetch the value from the store. (But considering it's only one parameter and it's free, it's a good trade-off I guess)
For the conclusion, I just want to mention that for all of this to work you will need to tweak your Lambda's policy so it can access Systems Manager and the secret stored in Parameter Store; failures there are easily inspected through CloudWatch.
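As a sketch of that policy tweak in serverless.yml terms (assuming the stripeSecretKey parameter name from the example above; the region and account ID are placeholders):

```yaml
provider:
  iamRoleStatements:
    # Allow the function to read and decrypt the parameter at runtime.
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: arn:aws:ssm:eu-central-1:<account_id>:parameter/stripeSecretKey
```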
Hopefully this helps someone out, happy coding :)

Serverless Framework: CloudFormation Variable Import/Export

I'm using the Serverless Framework and have multiple services which attempt to use the same SQS queue. I can successfully create the resource in the first service, but the second one is missing the Lambda trigger when deployed to AWS. Hardcoding the ARN will successfully create the trigger, so I can only assume I have something wrong with my syntax/indentation, but it's very similar to how I'm exporting/importing my API Gateway details and I'm just not seeing it.
I have an SQS Queue set up and exported from my first service like this:
resources:
  - Resources:
      InitializeAuthenticationQueue:
        Type: "AWS::SQS::Queue"
        Properties:
          QueueName: "InitializeAuthenticationQueue"
  - Outputs:
      InitializeAuthenticationQueueArnId:
        Value:
          Fn::GetAtt:
            - InitializeAuthenticationQueue
            - Arn
        Export:
          Name: ${self:provider.stage}-InitializeAuthenticationQueueQueueArnId
In my second service I am attempting to use the SQS ARN ID as a trigger for a function, like this:
functions:
  authenticationIntialize:
    handler: myHandlerFile.myHandler
    events:
      - sqs:
          arn:
            'Fn::ImportValue': ${self:provider.stage}-InitializeAuthenticationQueueArnId
I've also tried this to see if I have my indentation wrong:
functions:
  authenticationIntialize:
    handler: myHandlerFile.myHandler
    events:
      - sqs:
          arn:
            'Fn::ImportValue': ${self:provider.stage}-InitializeAuthenticationQueueArnId
Feel like I'm missing something obvious on this one, but I've been stuck on it way too long. Anyone able to help me spot the obvious?
What errors do you get? What does the generated .serverless/cloudformation-template-update-stack.json have for the export and import values?
I usually find it easier to use the internal Serverless CloudFormation property reference. So, where you are trying to import the SQS ARN, do this:
${cf:STACK_NAME.InitializeAuthenticationQueueArnId}
where STACK_NAME is the name of the CloudFormation stack, generated by the Serverless deployment, that has the SQS ARN output. Using this method, you reference the value to import via the CloudFormation key and not the export name (which is admittedly confusing).
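Concretely, in the second service's serverless.yml that would look something like this (auth-service-dev is a placeholder for the first service's actual stack name):

```yaml
functions:
  authenticationIntialize:
    handler: myHandlerFile.myHandler
    events:
      - sqs:
          # Reads the Output key from the other stack directly,
          # rather than going through the CloudFormation export name.
          arn: ${cf:auth-service-dev.InitializeAuthenticationQueueArnId}
```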

How do you shift/escalate your AWS lambda from one envr to another (eg. dev to prod) using alias?

I am creating an AWS serverless application with SAM. Basically, what I would like to achieve is to use API Gateway's different stages (dev/test/prod) to invoke the corresponding Lambda function aliases (dev/test/prod).
I am totally stuck. I would like to know what strategies people have taken to shift Lambda traffic, e.g. from LambdaA:dev to LambdaA:prod.
I have tried to use "AutoPublishAlias", but with SAM's AutoPublishAlias you can't have more than one alias in a single CloudFormation stack, so that makes traffic shifting impossible.
Before using a single stack, I also used Canary Deployment; it works OK when I separate the Lambda into multiple environments (i.e. dev-lambdaA, test-lambdaA, prod-lambdaA) managed by different CloudFormation stacks. But I would like to reduce the number of Lambda functions by having them all reside in a single stack.
What you can do is add the following to your template.yaml file:
Resources:
  ProductionAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: PRD
      DefinitionUri: ./prdswagger.yaml
  DevelopmentAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: DEV
      DefinitionUri: ./devswagger.yaml
And use the swagger files to create your endpoints. At every endpoint, add an x-amazon-apigateway-integration pointing to the correct Lambda version or alias that you are targeting.
x-amazon-apigateway-integration:
  httpMethod: "POST"
  type: aws_proxy
  uri: "arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:[account_nr]:function:[myfunctionname]:PRD/invocations"
  passthroughBehavior: "when_no_match"
