AWS multiple Lambda in same project - aws-lambda

My team is in the process of creating a project that follows a serverless architecture; we are using AWS Lambda, Node.js, and the Serverless Framework.
The application will be a set of services, each handled as a separate function.
I found examples that combine multiple functions under the same project and then use CloudFormation to deploy all of them at once, but with some defects we don't want: the resources of different modules get deployed for each Lambda function,
which causes redundancy, and if we change one shared file the change is not reflected in all Lambda functions, since each copy is local to its hosting Lambda function.
https://github.com/serverless/examples/tree/master/aws-node-rest-api-with-dynamodb
My question:
Do you know the best way to organize a project containing multiple functions, each with its own .yaml and configuration, with the ability to deploy all of them when needed or to selectively deploy only the updated functions?

I think I found a good way to do this, similar to the one mentioned here: https://serverless.readme.io/docs/project-structure
I created a service containing some Lambda functions, each contained within a separate folder, plus a libs folder at the root level containing all the common modules that can be used by my Lambda functions.
So my structure looks like:
root/
    functions/
        function1/
        function2/
    libs/
    tests/
    resources/
    serverless.yml
and in my yml file I'm pointing to the Lambdas with relative paths, like:
functions:
  hello1:
    handler: functions/function1/hello1.hello
Now I can deploy all functions with one Serverless command, or selectively deploy specific changed functions,
and each deployed Lambda will only contain the required code.
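For reference, the two deployment modes look like this with the Serverless CLI (using the function name from the example above):

# Deploy the whole service (all functions and resources)
serverless deploy

# Deploy only one updated function (faster; skips the full CloudFormation update)
serverless deploy function -f hello1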

Related

Use existing Lambda layer (AWS) in Serverless (framework) project

I am migrating existing Lambda functions created using the AWS GUI to a Serverless Framework project for better version control.
A few functions have layers; I am now trying to add a layer in the config file by directly using the ARN of the layer. This layer was created using the GUI, not the framework.
functions:
  functionName:
    handler: handlerFile.handler
    layers:
      - arn:aws:lambda:...:...:layer:layername:version # using the ARN directly; no layer config in this project
Now when I try to deploy the project, I am getting "Module not found. Can't resolve 'sharp'" (the sharp library is in the layer), so the layer is not working and the function cannot access its modules. I also see "node_modules doesn't exist or is not a directory". All the online tutorials and documentation add the layer files manually to the project, deploy a new layer, and then use that; is it not possible to use the ARN of an existing layer? It happens at the webpack compilation step of the deployment. This is the webpack config file:
module.exports = {
  target: 'node',
  mode: 'none'
}
The layer uses the folder structure mentioned in the docs, and it works fine in the existing Lambda function that I created in the GUI. I am using multiple layers, so I didn't want to add the layer files to the serverless project, to keep it clean. The last thing to try would be to manually create the layer directories and deploy the layers first using the Serverless Framework; then it might work (though I'm not sure).
Is it possible to use the ARN of an existing layer directly in the serverless function config, given that the layers have already been created using the GUI and not using the framework?
Serverless Framework version: 3
Layer type: Node.js 16
Yes, it is possible to use existing layers exactly in the way you added them; you should be able to use both existing layers referenced via ARN and layers created by the Framework. Could you please share the full error and tell us what version of the Framework you are using?
On a side note, "module not found" might suggest that the handler cannot be found. Double-check the handler path in your config for a typo (e.g. hanlerFile instead of handlerFile); maybe a typo there is causing the problem here?

AWS cloudformation custom resource to generate config file for another lambda

I want to generate a Lambda's config file dynamically (basically application config) during AWS stack creation.
Only once all the configs are ready should that particular Lambda be created, along with the newly generated file. Can I achieve this using custom resources in AWS CloudFormation?
I searched, but found only Lambda-backed resources, command runners, or SNS topics; there is no custom resource to write or modify local files. Could someone provide a sample or guidance on how to do this?
Here are some options I see for your use case:
Use a Lambda-based CloudFormation Custom Resource for your config file logic. Load base files from S3 or check them out from version control (git) within the Custom Resource Lambda function (see the sketch after this list).
Execute a custom script within your build/deploy process. E.g. you have a build.sh script that contains the commands to deploy the CF templates, but first it executes another script that creates the config file and places it in the Lambda function's source folder.
Use a Docker Image based Lambda function and include your config file logic in the Dockerfile. You can also use AWS SAM to build the docker image within the CF deployment.
Use AWS CDK and its concept of bundling for lambda functions.
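For the first option, a minimal CloudFormation sketch of the wiring might look like this (the Custom:: type name, the generator function, the role, and all property names are placeholders, not from the question):

Resources:
  # Lambda-backed custom resource; the function generates the config
  # (e.g. renders it and uploads it to S3) during stack creation.
  ConfigFileGenerator:
    Type: Custom::ConfigFile
    Properties:
      ServiceToken: !GetAtt ConfigGeneratorFunction.Arn
      Stage: prod                        # arbitrary input passed to the function

  # The application Lambda is only created once the config exists.
  AppFunction:
    Type: AWS::Lambda::Function
    DependsOn: ConfigFileGenerator
    Properties:
      Runtime: nodejs16.x
      Handler: index.handler
      Role: !GetAtt AppFunctionRole.Arn  # role defined elsewhere
      Code:
        S3Bucket: my-artifact-bucket     # placeholder: package including the generated config
        S3Key: app-with-config.zip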

CDK: Deploy microservice endpoints (lambdas) individually

I am writing IaC with CDK for a microservice that will use API Gateway's REST API.
I have one stack with all the Lambdas and the REST API in one place, and I can deploy everything just fine.
Now the problem is that when two people are working on different endpoints, I would want them to be able to deploy just the endpoint (Lambda) they are working on. Currently, when they deploy, CDK deploys all the endpoints from their repo, overwriting changes someone might have deployed from their branch.
I would happily share some code but I am not really sure what to share. I think I can use some help with how to structure the code in stacks to achieve what I need.
You have one API Gateway shared across two different endpoints from two different repos.
There are a couple of ways that I can think of:
Option 1: we need 4 stacks.
Gateway Stack: Api Gateway and Root endpoints.
Endpoint1 stack: Lambda and necessary routes.
Endpoint2 stack: Lambda and necessary routes.
Gateway Deploy stack: Deploy the stage.
Each time a Lambda function is changed, deploy its own stack and then the deploy stack; a rough sketch of this layout follows.
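A minimal CDK (TypeScript) sketch of the 4-stack layout, assuming CDK v2; all construct names, paths, and the mock root method are illustrative placeholders, not from the question:

import { App, Stack, StackProps } from "aws-cdk-lib";
import * as apigw from "aws-cdk-lib/aws-apigateway";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

// Gateway stack: owns the RestApi but does NOT deploy a stage.
class GatewayStack extends Stack {
  readonly restApiId: string;
  readonly rootResourceId: string;
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const api = new apigw.RestApi(this, "Api", { deploy: false });
    api.root.addMethod("GET", new apigw.MockIntegration()); // placeholder root endpoint
    this.restApiId = api.restApiId;
    this.rootResourceId = api.restApiRootResourceId;
  }
}

interface EndpointProps extends StackProps {
  restApiId: string;
  rootResourceId: string;
  path: string;
}

// Endpoint stack: one Lambda plus its route on the shared API.
class EndpointStack extends Stack {
  constructor(scope: Construct, id: string, props: EndpointProps) {
    super(scope, id, props);
    const api = apigw.RestApi.fromRestApiAttributes(this, "Api", {
      restApiId: props.restApiId,
      rootResourceId: props.rootResourceId,
    });
    const fn = new lambda.Function(this, "Fn", {
      runtime: lambda.Runtime.NODEJS_16_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset(`lambda/${props.path}`), // placeholder asset path
    });
    api.root.addResource(props.path).addMethod("GET", new apigw.LambdaIntegration(fn));
  }
}

// Deploy stack: creates the Deployment + Stage after the routes exist.
class GatewayDeployStack extends Stack {
  constructor(scope: Construct, id: string, props: Omit<EndpointProps, "path">) {
    super(scope, id, props);
    const api = apigw.RestApi.fromRestApiAttributes(this, "Api", {
      restApiId: props.restApiId,
      rootResourceId: props.rootResourceId,
    });
    // Note: salt or recreate the Deployment's logical id to force a redeploy.
    const deployment = new apigw.Deployment(this, "Deployment", { api });
    new apigw.Stage(this, "Stage", { deployment, stageName: "dev" });
  }
}

const app = new App();
const gw = new GatewayStack(app, "GatewayStack");
const shared = { restApiId: gw.restApiId, rootResourceId: gw.rootResourceId };
new EndpointStack(app, "Endpoint1Stack", { ...shared, path: "one" });
new EndpointStack(app, "Endpoint2Stack", { ...shared, path: "two" });
new GatewayDeployStack(app, "GatewayDeployStack", shared);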
Option 2: we need just 1 stack, but deploy the Lambdas separately.
A single CDK project deploys everything. The only thing to keep in mind is that the artifacts for the Lambda functions should be taken from an S3 bucket location.
Within the individual pipeline of each Lambda, copy the artifacts to the same S3 location referenced by the Lambda in CDK and trigger an update to the Lambda with the AWS CLI (update-function-configuration), e.g. something as simple as updating the description with a timestamp or an environment variable, just to trigger a new deployment (see the sketch below).
This way we can seamlessly deploy either the full CDK pipeline or an individual Lambda pipeline.
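A sketch of that per-Lambda trigger step (bucket, key, and function name are placeholders):

aws s3 cp bundle.zip s3://my-artifact-bucket/endpoint1/bundle.zip
aws lambda update-function-configuration \
    --function-name my-endpoint-fn \
    --description "redeployed $(date -u +%Y-%m-%dT%H:%M:%SZ)"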
You have 2 options to solve this problem without much work.
The first is to use code to identify who is deploying the stack. If developer 1 is deploying the stack, set some environment variable or parameter on the stack; based on that value, the CDK code should compile only one of the endpoint repos.
The second option is to not build the repos as part of the (CDK) deployment. Use continuous delivery (or anything else) to build the repo code separately, so CDK only deploys it.
Based on the context of your project, either strategy should work fine for you. Or share more context if it's not covered so far.
Thanks for your input, guys. I have taken the following approach, which works for me:
const testLambda = new TestLambda(app, "dev-test-lambda", {
  ...backendProps.dev,
  dynamoDbTable: docStoreDev.store,
});

const restApiDev = new RestApiStack(app, "dev-RestApi", {
  ...backendProps.dev,
  hostedZone: hostedZones.test,
  testFunction: testLambda.endpointFunction,
});
Now if a developer just wants to deploy their Lambda, they deploy only the stack for that Lambda, which won't deploy anything else. Since the RestApiStack requires the Lambda as a dependency, deploying that stack will also deploy all the other Lambdas at the same time.
One idea as well is for the developer to deploy the pipeline under their code branch name, so they can have a fully fledged environment without worrying about overriding another developer's Lambdas.
Once they're done with the feature, they just merge their code into the main branch and destroy their own pipeline.
It's a common approach :-)

Deploy at once multiple aws netcore lambda functions

Is there any solution, with the Serverless Framework or with an AWS CloudFormation template, to publish multiple Lambdas where each one lives in its own individual Visual Studio .proj and its own individual .sln?
Every example I can find has the Lambda functions in the same class or project.
I looked into this a while back. From what I remember, you can deploy multiple Lambda functions using the same CloudFormation template in one of two ways:
1. Manually create separate zip packages for each function, upload them to S3, and then reference each package explicitly in the CloudFormation template for the corresponding Lambda function (see the sketch after this list).
2. Combine the files from all the published projects into the same folder (the CloudFormation template must be in the same folder too), then use the "aws cloudformation package" command, which creates the zip, uploads it to S3, and updates the template with the S3 path to the package. I'm not sure you could even have nested folders for each project, due to how Lambda calls the methods.
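For approach #1, the per-function template section might look roughly like this (bucket, key, handler string, and role are placeholders):

Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: dotnet6
      Handler: MyProject::MyProject.Function::FunctionHandler  # assembly::type::method
      Role: !GetAtt MyFunctionRole.Arn
      Code:
        S3Bucket: my-deploy-bucket        # where the zip was uploaded
        S3Key: myfunction/package.zip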
The issue with #1 is that it's a much more manual process, or a lot more scripting has to be done.
The issue with #2 is that each Lambda function that's created will contain the files of all functions that are part of the package, even though you're only accessing one function handler. File conflicts are also possible if different versions of the same assembly are used in different projects. There's also a limit on the package size that can be loaded for Lambda functions (50 MB compressed, 250 MB uncompressed), so that may be a factor for some people.
Due to these added complexities and potential issues, we decided to have a separate CloudFormation template and stack for each Lambda function.
For Lambda limits, see "AWS Lambda Deployment Limits".

How to decrease the size of serverless deploy?

I'm deploying an AWS Lambda function using the Serverless Framework. My problem is that a large file (44 MB) is deployed every time I run sls deploy -f any_fn. I've had similar problems when there is a node_modules folder (which can be quite big).
Is there a way to reduce the upload size by uploading the common files only once (and for all functions)? Right now it keeps zipping and deploying that same binary file again and again, even though it never changes.
There's no way to do what you propose. AWS Lambda requires you to upload the entire package, including all dependencies, each time; everything has to be inside the zip file that is deployed to Lambda.
This was my solution:
1. Upload the large files to an S3 bucket.
2. Download the S3 files in code that runs in the global scope, not inside exports.handler, so it executes only once (per container).
3. To make sure the container is re-used, keep the Lambda warm using a CloudWatch timer (two simple steps).
This allows you to deploy only the small files.
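A minimal sketch of that pattern in a Node.js handler, assuming AWS SDK v3 (@aws-sdk/client-s3); the bucket and key are placeholders:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";
import { Readable } from "stream";

const s3 = new S3Client({});

// Global scope: runs once per container, not on every invocation.
const download = (async () => {
  const { Body } = await s3.send(new GetObjectCommand({
    Bucket: "my-shared-assets",   // placeholder bucket
    Key: "big-binary.bin",        // placeholder key
  }));
  await pipeline(Body as Readable, createWriteStream("/tmp/big-binary.bin"));
})();

export const handler = async () => {
  await download;                 // make sure the one-time download finished
  // ... use /tmp/big-binary.bin here ...
  return { statusCode: 200 };
};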
You can try using Lambda layers. All you need to do is create a separate Serverless project for dependency management (e.g. node_modules), and the rest of the services will refer to it (follow the docs). This should significantly reduce the deployment/package size of each individual Lambda; a minimal sketch follows.
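A minimal sketch of such a dependencies-only service (the service name, layer name, and path are placeholder assumptions):

service: shared-deps
provider:
  name: aws
  runtime: nodejs16.x
layers:
  nodeDeps:
    path: layer                  # folder containing nodejs/node_modules
    compatibleRuntimes:
      - nodejs16.x

The other services then reference the published layer's ARN under each function's layers: key, as in the layer question above.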
Use Lambda containers and your problems will be solved! Lambda container images have a 10 GB image size limit, so you can add anything you want in there. I've made many Express apps with serverless-http and Lambda containers.
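For illustration, the basic serverless-http wiring looks like this (a minimal sketch; the route and response are placeholders):

import express from "express";
import serverless from "serverless-http";

const app = express();
app.get("/", (_req, res) => res.json({ ok: true }));

// Lambda entry point: serverless-http translates API Gateway events
// into HTTP requests for the Express app.
export const handler = serverless(app);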
You can also attach an EFS volume to your Lambda and access your files from there. Check this tutorial.
