Deploying a lambda with terraform that contains multiple files? - aws-lambda

I have a Python Lambda I want to deploy that depends on some other Python scripts; the Lambda itself can't run without them. Looking at the docs, I don't see a way to package that entire "folder" as the Lambda and deploy it that way. I understand I can easily add that specific Lambda to my Step Function later, but I need the other scripts to go with it so it can actually run. I know how to use the archive provider to archive the entire folder. Could that be helpful here? Thanks.

You need to use the Terraform archive provider's archive_file data source (type = "zip" with source_dir pointing at your folder and an output_path for the zip) to package the whole directory. Then reference that zip as your Lambda function source via the aws_lambda_function resource's filename, and set source_code_hash (e.g. to the archive's output_base64sha256) so the function is redeployed whenever any of the files change.

Related

AWS Lambda Layer: No module named 'psycopg2._psycopg'

I have a library that I downloaded here: psycopg2
I tried all the Stack Overflow suggestions so far, but they didn't work.
I placed it in a folder like this, then zipped it into a python.zip file on Windows. The libraries inside are unzipped.
Then I created a Lambda layer like this:
I've made sure that the runtime for the layer and the function are the same. Can someone please assist? I've been struggling with this for more than a day.
AWS Lambda runs on an Amazon Linux environment, so if you create the zip of your dependencies on Windows it may not work when your Lambda function runs. It is better to build the layer inside a Docker environment that matches the Lambda runtime. Please check below:
https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/
You need to compile it on the same architecture as the Lambda runtime. I would log into an Amazon Linux EC2 instance, install psycopg2 there into a specific directory (e.g. pip install with -t), then copy those files into your Lambda layer zip on your Windows machine.
I can send more specific steps if you need.

AWS cloudformation custom resource to generate config file for another lambda

I want to generate a Lambda's config file (basically its application config) dynamically during AWS stack creation.
Only once all the configs are ready should that particular Lambda be created, along with the newly generated file. Can I achieve this using custom resources in AWS CloudFormation?
I searched, but I only found custom resources backed by Lambda, CommandRunner, or SNS topics; there is no custom resource that writes or modifies local files. Could someone provide a sample or some guidance on how to do this?
Here are some options I see for your use case:
Use a Lambda-based CloudFormation custom resource for your config file logic. Load base files from S3 or check them out from version control (git) inside the custom resource Lambda function (see the sketch after this list).
Execute a custom script within your build/deploy process. E.g. you have a build.sh script that contains the commands to deploy the CloudFormation templates, but first you execute another script that creates the config file and places it in the Lambda function's source folder.
Use a Docker image based Lambda function and include your config file logic in the Dockerfile. You can also use AWS SAM to build the Docker image within the CF deployment.
Use AWS CDK and its concept of bundling for lambda functions.
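To make the first option more concrete, here is a minimal Python sketch of such a custom resource handler: it renders an application config, writes it to S3, and reports back to CloudFormation. The bucket, key and property names (my-config-bucket, ServiceEndpoint, Stage) are placeholders, not anything from your stack.

# Minimal sketch of a CloudFormation custom resource Lambda that generates a config file.
import json
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    status, data = "SUCCESS", {}
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Build the config from properties passed in the template
            # (ServiceEndpoint / Stage are hypothetical examples).
            props = event.get("ResourceProperties", {})
            config = {"endpoint": props.get("ServiceEndpoint"), "stage": props.get("Stage")}
            s3.put_object(
                Bucket="my-config-bucket",           # placeholder
                Key="generated/app-config.json",     # placeholder
                Body=json.dumps(config).encode(),
            )
            data["ConfigS3Key"] = "generated/app-config.json"
    except Exception:
        status = "FAILED"

    # A custom resource must report back to the pre-signed ResponseURL,
    # otherwise the stack hangs until CloudFormation times out.
    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch logs for details",
        "PhysicalResourceId": "app-config-generator",
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT",
                                 headers={"Content-Type": ""})
    urllib.request.urlopen(req)

The Lambda that actually needs the config could then declare a DependsOn on this custom resource and read the generated file from S3 at startup, which keeps the ordering requirement from the question.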

AWS Codebuild: Monorepo and multiple builds?

I have an AWS CodePipeline which uses CodeBuild as the build step and deploys Lambda functions. This pipeline is triggered upon any commit on the development branch which houses multiple Lambda functions. Right now, since all these Lambdas use the same pipeline, they have the same build job as well.
The problem is: what happens if one of my Lambdas has a different requirement in the build step (say, installing an extra library)? Is there any way to trigger a different build job for a specific Lambda? I'm guessing this runs into the age-old issue of CodePipeline not dealing well with monorepos, but any suggestions are welcome.
You could integrate change detection for your Lambda functions. The only thing you need is to check out the source separately in the job so that you have the .git folder (see: https://forums.aws.amazon.com/thread.jspa?threadID=251732).
Afterwards you can check with git which Lambda function actually changed and run your pre-build commands based on the result.
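To illustrate, here is a rough Python sketch of that git check, assuming each function lives in its own folder under functions/ and that HEAD~1..HEAD is what your pipeline considers "since the last build" (both are assumptions to adapt to your repo):

# Detect which Lambda folders changed in the last commit and only build those.
import subprocess

changed_files = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

changed_lambdas = {
    path.split("/")[1]                 # functions/<name>/handler.py -> <name>
    for path in changed_files
    if path.startswith("functions/") and path.count("/") >= 2
}

for name in sorted(changed_lambdas):
    print(f"Changed Lambda, run its build steps: {name}")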

AWS Lambdas: SAM deployment ...identifying and removing old S3 package versions?

I'm relatively new to AWS lambda's and SAM, and now I've got things working I've got a seemingly simple question I can't find an answer to.
I've spent the last week getting a lambda app up and running using SAM (build, package, deploy numerous times until it works).
Problem
So now the S3 bucket I'm uploading to has numerous (100 or so) previously uploaded (by sam package) versions of my zipped-up code.
Question
How can you identify which zipped up packages are the current ones (ie used by a current function and/or layer), and remove all the old obsolete ones?
Is there a way in SAM (command-line options or in the template files) to have it automatically delete old versions of your package when you upload a new version with 'sam package'?
Is there somewhere in the AWS console to find which zip file (key) in your bucket a current function or layer is using? (I tried everywhere but couldn't find it; it's easy to get the ARNs, but not the actual URI in your bucket that they map to.)
Slight Complication
In the bucket I'm using to store the lambda packages, I've also got a custom layer.
So if it were just the app packages, I could easily (right now) just go in and delete everything in the bucket, then do a rebuild/package/deploy to clean it up. But that would also delete my layer (and, same problem, I'm not sure which zip file in the bucket the layer is using).
But that approach wouldn't work long term anyway, as I'm planning to put together approx 10-15 different packages/functions, so deleting everything in the bucket when just one of them is updated is not going to work.
thanks for any thoughts, ideas and help!
1. In your packaged.yaml file (generated by sam package) you can see, under each Lambda function, a CodeUri with a unique path, s3://your-bucket/id. That id is the object used by the current function and/or layer and it resides in your bucket. For a layer it's ContentUri. (See the short script after this list.)
2. Automatically deleting old versions of your package when 'sam package' uploads a new version: I'm not aware of anything like that.
3. Through the AWS console you can see your layer version, but I don't think there is any indication of your function/layer CodeUri/ContentUri.
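If you'd rather not hunt through packaged.yaml by hand, a short script can list every CodeUri/ContentUri for you. A minimal sketch, assuming PyYAML is installed and packaged.yaml sits in the current directory; CloudFormation's intrinsic tags are simply ignored so the file parses:

# List the S3 URI that each function/layer in packaged.yaml currently points to.
import yaml

class CfnLoader(yaml.SafeLoader):
    """SafeLoader that tolerates CloudFormation intrinsic tags like !Ref or !Sub."""

CfnLoader.add_multi_constructor("!", lambda loader, suffix, node: None)

with open("packaged.yaml") as f:
    template = yaml.load(f, Loader=CfnLoader)

for name, resource in template.get("Resources", {}).items():
    props = resource.get("Properties") or {}
    uri = props.get("CodeUri") or props.get("ContentUri")
    if uri:
        print(f"{name} ({resource.get('Type', '')}): {uri}")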
You can try to compare the currently deployed stack with what you've stored in S3. Let's assume you have a stack called test-stack; you can then retrieve the processed template from CloudFormation using the AWS CLI like this:
AWS_PAGER="" aws cloudformation get-template --stack-name test-stack \
--output json --template-stage Processed
To only get the processed template body, you may want to pipe the output again through
jq -r ".TemplateBody"
Now you have the processed CFN template that tells you which S3 buckets and keys it is using. Here is an example for a lambda function:
MyLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      S3Bucket: my-bucket
      S3Key: 0c53a7ccb1c1762eaeebd96555d13a20
You can then try to delete s3 objects that are not referenced by the current stack.
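Here is a small boto3 sketch of that comparison, assuming a stack called test-stack and a deployment bucket called my-deployment-bucket (both placeholders). It only prints candidates and leaves the actual deletion commented out:

# Flag S3 objects in the deployment bucket that the current stack no longer references.
import json

import boto3

STACK_NAME = "test-stack"            # placeholder
BUCKET = "my-deployment-bucket"      # placeholder

cfn = boto3.client("cloudformation")
s3 = boto3.client("s3")

# The processed template contains the resolved S3Bucket/S3Key values.
body = cfn.get_template(StackName=STACK_NAME, TemplateStage="Processed")["TemplateBody"]
# boto3 returns YAML templates as a string and JSON templates as a dict.
body_text = body if isinstance(body, str) else json.dumps(body)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj["Key"] not in body_text:
            print("not referenced by the current stack:", obj["Key"])
            # s3.delete_object(Bucket=BUCKET, Key=obj["Key"])  # enable once you trust the output

If several stacks deploy from the same bucket, you would of course need to collect the referenced keys from all of them before deleting anything.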
There used to be a GitHub ticket requesting some sort of automatic cleanup mechanism, but it was closed as out of scope: https://github.com/aws/serverless-application-model/issues/557#issuecomment-417867028
It may be worth noting that you could also set up an S3 lifecycle rule to automatically clean up old S3 objects, as suggested here: https://github.com/aws/aws-sam-cli/issues/648. However, I don't think this will always be a suitable solution.
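Setting up such a rule is a one-off call. A minimal boto3 sketch, where the bucket name and the 30-day expiry are arbitrary placeholders:

# Expire deployment packages automatically after 30 days.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="my-deployment-bucket",    # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-sam-packages",
                "Filter": {"Prefix": ""},    # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)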
Last but not least, there has been an attempt to include an automatic cleanup approach in the SAM documentation, but it was dismissed as:
[...] there are certain use cases that require these packaged S3 objects to persist, and deleting them would cause significant problems. One such example is the "CloudFormation stack deployment rollback" scenario: 1) Deploy version N of a stack, 2) Delete the packaged S3 object that version N uses, 3) Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback.
https://github.com/awsdocs/aws-sam-developer-guide/pull/3#issuecomment-462993286
So while it is possible to identify obsolete S3 packaged versions, it might not always be a good idea to delete them after all...
Actually, CloudFormation (which SAM is based on) uses S3 as temporary storage only. When you create or update the Lambda function, a copy of the code is made, so you could delete all objects from the bucket and the Lambda function would still work correctly.
Caveat: there are cases where the S3 object may still be required, for example to roll back a CloudFormation stack, as in the "CloudFormation stack deployment rollback" scenario (reference):
Deploy version N of a stack
Delete the packaged S3 object that version N uses
Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback

Deploy at once multiple aws netcore lambda functions

Is there any solution with the Serverless Framework or with an AWS CloudFormation template to publish multiple Lambdas that are each located in their own Visual Studio .proj and .sln?
Every example I can find has the Lambda functions in the same class or project.
I looked into this a while back. From what I remember I think you can deploy multiple lambda functions using the same CloudFormation template in one of two ways.
Manually create separate zip packages for each function, load them into S3, and then reference that package explicitly in the CloudFormation template for each Lambda function.
Combine the files from all the published projects into the same folder (the CloudFormation template must be in the same folder too), then use the "aws cloudformation package" command, which will create the zip, upload it to S3, and update the template with the S3 path to the package. I'm not sure you'd even be able to have nested folders for each project, due to how Lambda calls the methods.
The issue with #1 is that it's a lot more of a manual process, or a lot more scripting that has to be done.
The issue with #2 is that each Lambda function that's created will contain the files for all functions that are part of that package, even though you're only accessing one Function handler. Also, file conflicts are possible if different versions of the same assembly are used in different projects. There's also a limit on the package size that can be loaded for Lambda functions (50MB compressed, 250MB uncompressed) so that may also be a factor for some people.
Due to these added complexities & potential issues we just decided to have a separate CloudFormation template & stack for each Lambda function.
Lambda limits - see "AWS Lambda Deployment Limits"
