Forced redeployment of Lambda function from S3? - aws-lambda

I have a Lambda function defined in a CloudFormation template with a reference to an S3 bucket and key where I have saved a zip file containing the Lambda source, in the usual fashion. I have a separate CI build process that builds the Lambda function and drops it into S3. Now, I want the S3 key within the CloudFormation template to be static; I don't want to change it for every Lambda commit and rebuild. But CloudFormation thinks the Lambda hasn't changed because the S3 key hasn't changed, even though the contents of the zip file have.
Must I change the S3 key each time to trigger Lambda redeployment, or is there a way to force Lambda redeployment via CloudFormation whilst retaining the static key?

You are right, CloudFormation doesn't detect the change, since the S3 key remains the same even though the contents behind it have changed.
As you mentioned, you can use an S3 key that is different from the one used in the previous execution so that the new Lambda code gets deployed.
To do that, keep the S3 key as a CloudFormation template parameter, so your CI process can pass in a new value on each deploy.
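A minimal sketch of that approach (the parameter, role, and bucket names below are placeholders, not values from your template):

Parameters:
  LambdaS3Key:
    Type: String
    Description: Key of the zip file produced by the CI build
  LambdaExecutionRoleArn:
    Type: String
    Description: ARN of an existing Lambda execution role

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler            # placeholder handler
      Runtime: nodejs18.x               # placeholder runtime
      Role: !Ref LambdaExecutionRoleArn
      Code:
        S3Bucket: my-artifact-bucket    # placeholder bucket name
        S3Key: !Ref LambdaS3Key         # CI passes a new key on each deploy

Passing a new LambdaS3Key value on each deploy (for example, one that embeds the commit hash) changes the stack's parameters, so CloudFormation picks up the new code even though the template itself stays static.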
Otherwise, try using SAM packaging in AWS CodeBuild and use CodeDeploy with CloudFormation.
With that approach the template's code location is not a fixed zip reference; instead the build takes the code path, builds it, and the template gets updated with the new deployment package location every time (see the buildspec.yml in CodeBuild).
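A rough sketch of what such a buildspec.yml could look like; the bucket name is an assumption, not something from the question:

version: 0.2
phases:
  build:
    commands:
      - sam build
      - sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
artifacts:
  files:
    - packaged.yaml    # hand this to CloudFormation/CodeDeploy in the next pipeline stage

The packaged.yaml artifact then carries the fresh S3 location of the deployment package that the deploy stage rolls out.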
References:
https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
Hope this helps.

Related

AWS cloudformation custom resource to generate config file for another lambda

I want to generate a Lambda's config file dynamically (basically the application config) during AWS stack creation.
Only once all the configs are ready should that particular Lambda be created, along with the newly generated file. Can I achieve this using custom resources in AWS CloudFormation?
I searched, but I only found examples using Lambda, CommandRunner, or SNS topics; nothing about a custom resource that writes or modifies local files. Could someone provide a sample or some guidance on how to do this?
Here are some options I see for your use case:
Use a Lambda based CF Custom Resource for your config file logic. Load base files from S3 or checkout from Version Control (git) within the Custom Resource Lambda function.
Execute a custom script within your build/deploy process. E.g. you have a build.sh script that contains the commands to deploy the CF templates, but first you execute another script that creates the config file and places it in the source folder for the lambda function (a rough sketch of such a script follows this list).
Use a Docker Image based Lambda function and include your config file logic in the Dockerfile. You can also use AWS SAM to build the docker image within the CF deployment.
Use AWS CDK and its concept of bundling for lambda functions.
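For the second option, a rough sketch of such a wrapper script; generate-config.sh, the paths, bucket, and stack name are all hypothetical placeholders:

#!/usr/bin/env bash
set -euo pipefail

# 1. Generate the application config into the Lambda source folder
#    (generate-config.sh stands in for whatever produces your config)
./generate-config.sh > lambda-src/app-config.json

# 2. Package and deploy the stack as usual
aws cloudformation package --template-file template.yaml \
    --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml \
    --stack-name my-app-stack --capabilities CAPABILITY_IAM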

AWS Lambda's: SAM deployment ...identifying and removing old S3 package versions?

I'm relatively new to AWS lambda's and SAM, and now I've got things working I've got a seemingly simple question I can't find an answer to.
I've spent the last week getting a lambda app up and running using SAM (build, package, deploy numerous times until it works).
Problem
So now my S3 bucket I'm using to upload to has numerous (100 or so) previously uploaded (by sam package) versions of my zip'd up code.
Question
How can you identify which zipped up packages are the current ones (ie used by a current function and/or layer), and remove all the old obsolete ones?
Is there a way in SAM (command-line options or in the template files) to have it automatically delete old versions of your package when you 'sam package' upload a new version?
Is there somewhere in the AWS console to find the key of the zip file in your bucket that a current function or layer is using? (I tried everywhere to find that but couldn't manage it... it's easy to get the ARNs, but not the actual URI in your bucket that they map to.)
Slight Complication
In the bucket I'm using to store the lambda packages, I've also got a custom layer.
So if it was just the app packages, I could easily (right now) just go in and delete everything in the bucket, then do a rebuild/package/deploy to clean it up... but that would also delete my layer (and, same problem, I'm not sure which zip file in the bucket the layer is using).
But that approach wouldn't work long term anyway, as I'm planning to put together approx 10-15 different packages/functions, so deleting everything in the bucket when just one of them is updated is not going to work.
thanks for any thoughts, ideas and help!
1. In your packaged.yaml file (generated by sam package) you can see, under each Lambda function, a CodeUri with a unique path of the form s3://your-bucket/id. That id is the object used by the currently deployed function and resides in your bucket. For a layer it's ContentUri instead.
2. Automatically deleting old versions of your package when 'sam package' uploads a new version - I'm not aware of anything like that.
3. Through the AWS console you can see your layer versions, but I don't think there is any indication of a function's or layer's CodeUri/ContentUri.
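For illustration, the relevant part of a packaged.yaml might look roughly like this (the bucket name and hashes are made up):

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.11
    CodeUri: s3://my-artifact-bucket/3f9c2a41d0e8b7c6a5f4e3d2c1b0a987    # current package for this function

MyLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: s3://my-artifact-bucket/8a7b6c5d4e3f2a1b0c9d8e7f6a5b4c3d  # current package for this layer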
You can try to compare the currently deployed stack with what you've stored in S3. Let's assume you have a stack called test-stack, then you can retrieve the processed stack from CloudFormation using the AWS CLI like this:
AWS_PAGER="" aws cloudformation get-template --stack-name test-stack \
--output json --template-stage Processed
To get only the processed template body, you can pipe the output through
jq -r ".TemplateBody"
Now you have the processed CFN template that tells you which S3 buckets and keys it is using. Here is an example for a lambda function:
MyLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      S3Bucket: my-bucket
      S3Key: 0c53a7ccb1c1762eaeebd96555d13a20
You can then try to delete s3 objects that are not referenced by the current stack.
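A rough sketch of that comparison, reusing the test-stack example above; the bucket name is a placeholder, and you should review the output before deleting anything:

TEMPLATE=$(aws cloudformation get-template --stack-name test-stack \
  --template-stage Processed --output json | jq -r ".TemplateBody")

# List every key in the packaging bucket and flag the ones the processed
# template does not reference anywhere.
aws s3api list-objects-v2 --bucket my-artifact-bucket \
  --query "Contents[].Key" --output text | tr '\t' '\n' | while read -r key; do
    echo "$TEMPLATE" | grep -q "$key" || echo "unreferenced: $key"
done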
There used to be a GitHub issue requesting some sort of automatic cleanup mechanism, but it was closed as out of scope: https://github.com/aws/serverless-application-model/issues/557#issuecomment-417867028
It may be worth noting that you could also set up an S3 lifecycle rule to automatically clean up old S3 objects, as suggested here: https://github.com/aws/aws-sam-cli/issues/648 However, I don't think that this will always be a suitable solution.
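If a purely time-based cleanup is acceptable, a minimal sketch of such a rule via the CLI (the bucket name and 30-day window are assumptions):

aws s3api put-bucket-lifecycle-configuration --bucket my-artifact-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "ExpireOldPackages",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 }
    }]
  }'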
Last but not least, there has been an attempt to include some automatic cleaning approach in the sam documentation, but it was dismissed as:
[...] there are certain use cases that require these packaged S3 objects to persist, and deleting them would cause significant problems. One such example is the "CloudFormation stack deployment rollback" scenario: 1) Deploy version N of a stack, 2) Delete the packaged S3 object that version N uses, 3) Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback.
https://github.com/awsdocs/aws-sam-developer-guide/pull/3#issuecomment-462993286
So while it is possible to identify obsolete S3 packaged versions, it might not always be a good idea to delete them after all...
Actually, CloudFormation (which SAM is based on) uses S3 as temporary storage only. When you create or update the Lambda function, a copy of the code is made, so you could delete all objects from the bucket and the Lambda function would still work correctly.
Caveat: there are cases where the S3 object may still be required, for example to roll back a CloudFormation stack. Consider the "CloudFormation stack deployment rollback" scenario (reference):
Deploy version N of a stack
Delete the packaged S3 object that version N uses
Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback

Deploy at once multiple aws netcore lambda functions

Is there any solution, with the Serverless Framework or with an AWS CloudFormation template, to publish multiple Lambdas that each live in an individual Visual Studio .proj and an individual Visual Studio .sln?
Every example I can find has the Lambda functions in the same class or project.
I looked into this a while back. From what I remember I think you can deploy multiple lambda functions using the same CloudFormation template in one of two ways.
Manually create separate zip packages for each function, load them into S3, and then reference that package explicitly in the CloudFormation template for each Lambda function.
Combine the files from all the published projects into the same folder (the CloudFormation template must be in the same folder too), then use the "aws cloudformation package" command, which will create the zip, load it to S3, and then update the template with the S3 path to the package (a command sketch follows at the end of this answer). I'm not sure if you'd even be able to have nested folders for each project due to how Lambda calls the methods.
The issue with #1 is that it's a lot more of a manual process, or a lot more scripting that has to be done.
The issue with #2 is that each Lambda function that's created will contain the files for all functions that are part of that package, even though you're only accessing one Function handler. Also, file conflicts are possible if different versions of the same assembly are used in different projects. There's also a limit on the package size that can be loaded for Lambda functions (50MB compressed, 250MB uncompressed) so that may also be a factor for some people.
Due to these added complexities & potential issues we just decided to have a separate CloudFormation template & stack for each Lambda function.
Lambda limits - see "AWS Lambda Deployment Limits"
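For approach #2, a rough sketch of the commands involved; the project paths, bucket, and stack name are placeholders rather than anything from the question:

# Publish every project into one shared folder next to the template
dotnet publish src/FunctionA/FunctionA.csproj -c Release -o ./publish
dotnet publish src/FunctionB/FunctionB.csproj -c Release -o ./publish
cp template.yaml ./publish/

# Package (zip + upload to S3 + rewrite code locations) and deploy once
cd ./publish
aws cloudformation package --template-file template.yaml \
    --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml \
    --stack-name netcore-lambdas --capabilities CAPABILITY_IAM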

Updating a CloudFormation stack if codebase updates

So I have an existing CloudFormation stack up and running. However, I haven't found a solution for my problem, which is that I want my resources, for example EC2 and Lambda, to run up-to-date code.
It seems that a CloudFormation stack doesn't update if the template doesn't have any changes. I'm holding my code inside an S3 bucket as a zip file, but if this file gets changed, CloudFormation doesn't notice it.
Is my best bet creating a git hook script that uses AWS CLI and updates the EC2 and Lambda code or is there some 'elegant' way for CloudFormation to notice these changes?
Create a new Lambda function that updates your existing Lambda and EC2 resources (or calls CloudFormation to update them). On your S3 bucket, configure an object Put event notification that invokes that new Lambda function. Then, whenever a new zip file is put into S3, your EC2 and Lambda code gets updated.
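For the Lambda side, the updater function would effectively do the equivalent of this CLI call (the function, bucket, and key names are placeholders):

# Point the existing function at the freshly uploaded zip file
aws lambda update-function-code \
    --function-name my-existing-function \
    --s3-bucket my-code-bucket \
    --s3-key builds/lambda-code.zip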

How to decrease the size of serverless deploy?

I'm deploying an AWS Lambda function using the Serverless Framework. My problem is that there is a large file (44 MB) that is deployed every time I do sls deploy -f any_fn. I've had similar problems when there is a node_modules folder (which can be quite big).
Is there a way to reduce the upload size by uploading the common files only once (and for all functions)? Because right now it keeps zipping and deploying that same binary file again and again though it never changes.
There's no way to do what you propose. AWS Lambda requires you to upload the entire package including all dependencies each time. Everything has to be inside the zip file that is deployed to Lambda.
This was my solution:
Upload the large files to an S3 bucket.
Download all of the S3 files in code that runs in the global scope, not inside the scope of exports.handler, so it executes only once per container.
To make sure the container gets re-used, keep the Lambda warm using a CloudWatch timer; that takes two simple steps.
This allows you to deploy only the small files.
You can try using Lambda layers. All you need to do is create a separate Serverless project for dependency management (for example, node_modules) and have the rest of the services refer to it (follow the docs). This should reduce the deployment package size of each individual Lambda significantly.
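A rough sketch of that split; the service names, runtime, and layer ARN below are placeholders:

# serverless.yml of the dependency-only project
service: shared-deps
provider:
  name: aws
  runtime: nodejs18.x
layers:
  commonLibs:
    path: layer                # folder containing nodejs/node_modules
    compatibleRuntimes:
      - nodejs18.x

# serverless.yml of a service that uses the layer
service: my-service
provider:
  name: aws
  runtime: nodejs18.x
functions:
  hello:
    handler: handler.hello
    layers:
      - arn:aws:lambda:us-east-1:123456789012:layer:commonLibs:1   # placeholder layer ARN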
Use Lambda containers and your problems will be solved! Lambda container images have a 10 GB size limit! You can add anything you want in there! I've made many Express apps with serverless-http and Lambda containers.
You can also add an EFS file system to your Lambda and access your files from there. Check this tutorial.
