Managing a development codebase in Lambda - aws-lambda

I'm moving to serverless with AWS Lambda. I've gotten to "hello world" so far. I'm used to having a development codebase that I work on, test, and then promote to production. Is there an easy way to do this with Lambda?

I use different AWS accounts for dev, staging, and prod. When deploying the Lambda, I just choose which AWS profile to use so it deploys to the right environment.
If you're using a single AWS account, each deployment of a Lambda function will have a version, and you can use those versions to separate environments (see the CLI sketch after these suggestions).
If you're using API Gateway with Lambda, you can use API Gateway's "Stages".
You should use a deployment framework such as the Serverless Framework; that will make things easier for you.
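If you go the single-account route with Lambda versions, a minimal sketch with the AWS CLI could look like this (the function name, alias names and version numbers are hypothetical; aliases are one optional way to label a version as an environment):

# publish the currently deployed code as a new immutable version
aws lambda publish-version --function-name my-function --description "promoted build"
# point a "prod" alias at a specific version...
aws lambda create-alias --function-name my-function --name prod --function-version 3
# ...and move the alias later when you promote a newer version
aws lambda update-alias --function-name my-function --name prod --function-version 4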

Using a framework like Serverless makes it easy to develop, configure and deploy Lambdas, API Gateways and other event sources to AWS. I highly recommend that you adopt the Serverless Framework. It also makes it easier to integrate serverless deployments with your current CI system.
Now, if you have all your environments within one AWS account, you can use stages to represent each environment. Using Serverless, you can simply deploy the Lambdas to a different environment with the --stage (-s) argument:
serverless deploy -s <env/stage name>
You can add some logic to your serverless.yml file to pick up the right configuration based on the stage (assuming that different environments need different resources, such as databases or S3 buckets).
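For example, one common pattern (only a sketch; the service name, runtime, table names and variable names are hypothetical) is to key a block of settings by stage inside serverless.yml and resolve it with the stage option:

service: my-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}

custom:
  # hypothetical per-stage settings; add one block per environment
  settings:
    dev:
      tableName: orders-dev
    prod:
      tableName: orders-prod

functions:
  hello:
    handler: handler.hello
    environment:
      TABLE_NAME: ${self:custom.settings.${opt:stage, 'dev'}.tableName}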
If you are using different AWS accounts for prod and non-prod (recommended), then all you need to do is provide an additional argument for the profile:
serverless deploy --profile <prod/nonprod profile> --stage <prod/nonprod stage>

Related

Running/Testing an AWS Serverless API written in Terraform

No clear path to do development in a serverless environment.
I have an API Gateway backed by some Lambda functions declared in Terraform. I deploy to the cloud and everything is fine, but how do I go about setting up a proper workflow for development? It seems like a struggle to push every small code change to the cloud while developing just to run your code. Terraform has started getting some support from the SAM CLI for running your Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test out your endpoints in Postman, for example.
First of all, I use the Serverless Framework rather than Terraform, so my answer is based on what you provided and what I found around it.
From what I understand of the documentation you provided, you are able to run the SAM CLI with Terraform (see the "Local testing" chapter).
You can follow that documentation to invoke your functions locally.
I recommend using JSON files to define your test cases instead of injecting payloads through stdin.
The first step is to create your payload in a JSON file and invoke your Lambda with it, like:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
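The exact shape of the payload depends on how your Lambda is invoked; for an API Gateway-style event it might look roughly like this (the field values here are purely illustrative):

{
  "httpMethod": "GET",
  "path": "/hello",
  "queryStringParameters": { "name": "world" },
  "headers": { "Accept": "application/json" },
  "body": null
}

If I remember correctly, sam local generate-event apigateway aws-proxy can also scaffold a complete example event for you.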

AWS PublishVersion together with Serverless

I have a pretty big project that I use Serverless Framework to deploy to AWS (a few lambdas together at a time) using Windows Terminal.
I would do:
serverless deploy -s integration
and it will take all of my lambdas and deploy them. My problem is that I need to use the versioning of AWS, and I don't know how to do it.
After I do the serverless deploy, do I need to open the AWS CLI console and run something like this for each lambda that I already deployed using serverless?
version=$(aws lambda publish-version --function-name test_lambda --description "updated via cli" --region eu-west-1 | jq '.Version')
I'm just confused on how to combine the 2 ways of deploying lambdas.
By default, all functions deployed with the Serverless Framework are versioned. You can disable this, or turn it on explicitly, by setting:
provider:
  versionFunctions: true # or false to turn it off
Please keep in mind that old versions are not removed automatically, so if you want to keep only, say, the few most recently deployed versions, you might need to use a plugin such as https://github.com/claygregory/serverless-prune-plugin
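A sketch of what that could look like in serverless.yml, based on the options shown in that plugin's README (the retention count is an arbitrary example):

provider:
  name: aws
  versionFunctions: true   # keep publishing a new version on every deploy

plugins:
  - serverless-prune-plugin

custom:
  prune:
    automatic: true   # prune old versions automatically after each deploy
    number: 3         # keep only the 3 most recent versions (illustrative)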

How to manage production, test and development environments with serverless framework

I am planning to build an enterprise application using aws lambda and serverless framework.
I want to separate the dev, test and prod environments and I am planning to use AWS Parameter store for it.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, then the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to aws lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set up the stage; this value can be passed as a parameter when executing sls commands.
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo that only very specific people have access to, and then reference it in your serverless.yml, e.g.:
custom: ${file(./config/${opt:stage, 'dev'}.json)} and then in your config folder you create the prod.json file, which points to the real one in the new repo you created. Note: this would make your project harder to maintain.
Considering you don't want your developers to run your production environment locally, you can check the environment variable that serverless-offline sets and block execution there. You could also simply instruct them not to do so.
Here is what should be a good practice and solution for your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different levels of access. When a developer tries to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure profiles to define which user will execute the sls command (see the example after these notes).
Your development team will still have access to your configuration file.
Note: In this case the person/group with access to the production VPC will have to do the deploy.
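For example, that person or group could keep the production credentials in a separate named profile and select it only at deploy time (the profile and stage names below are hypothetical):

# one-time setup of the production credentials under a named profile
aws configure --profile prod-deployer
# select that profile only when deploying to the production stage
AWS_PROFILE=prod-deployer serverless deploy --stage production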
If this answer does not suffice, could you please clarify which type of resource(s) are sensitive across your Serverless project? I am taking for granted that it is the DB, as that is the most common scenario.

Difference between AWS::Serverless::Function and AWS::Lambda::Function

I am developing an AWS Lambda function and I have the option of using either of these two resource types, but I can't find a good explanation of the difference between them. Which one should be used, and in which case?
AWS::Serverless::Function is a resource type from the AWS Serverless Application Model (AWS SAM); at deploy time the SAM transform expands it into a plain AWS::Lambda::Function (the raw CloudFormation resource) plus supporting resources. SAM is used to define a serverless application, which you package (via S3) and deploy to AWS Lambda.
SAM also comes into play when testing the Lambda function locally, because it's not practical to deploy to AWS Lambda every time you make a code change.
You can configure SAM in your IDE (Eclipse, for example), test and finalise the code, then deploy it to Lambda.
For more info about SAM, see https://github.com/awslabs/serverless-application-model/blob/master/HOWTO.md
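As a rough sketch (the resource name, handler, runtime and API path below are hypothetical), the shorthand resource looks like this, with the Transform line being what enables the expansion:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # expands SAM resources into plain CloudFormation
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function      # becomes an AWS::Lambda::Function after the transform
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get

Writing the same thing with AWS::Lambda::Function directly would also require you to declare the execution role, permissions and API Gateway wiring yourself.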

Pattern to deploy AWS Beanstalk in laravel

I have been following this guide:
https://deliciousbrains.com/scaling-laravel-using-aws-elastic-beanstalk-part-3-setting-elastic-beanstalk/
However, I am stuck at this point.
Not in terms of something not working, but in terms of how it should be done properly. Which app should I deploy?
Is it the development app that is tested and then deployed? Do I create another instance in AWS that will only be used to deploy finished apps? What is the pattern to follow?
At the moment I have a local development server which runs on my PC, and also one development EC2 instance on AWS. Do I need more than that on top of Elastic Beanstalk?
Please advise me! Thanks!
What you are looking for here is not just a pattern but an architecture; I'll try to help you with the information you provided.
First, it is important that you really understand what Beanstalk is and how it works. See: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/dg/Welcome.html
To answer your question: applications are typically deployed to Beanstalk for scalable production use, but nothing prevents you from setting up development environments for testing, too.
You do not need to create an instance just to deploy; you can deploy from your own local machine using the console, the EB CLI, or the API. Look:
Console: https://sa-east-1.console.aws.amazon.com/elasticbeanstalk/home
EB CLI: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/dg/eb-cli3.html
API: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/api/Welcome.html
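As a minimal sketch of the EB CLI flow from a local machine (the application name, platform string and environment name are hypothetical and may differ across EB CLI versions):

# link the local project folder to a Beanstalk application
eb init my-laravel-app --platform php --region eu-west-1
# create an environment and deploy the current code to it
eb create staging-env
eb deploy staging-env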
Having said that, I will describe a scenario that is useful in several cases:
You create a Beanstalk application from the console or CLI and configure the integration with AWS CodeCommit. CodeCommit saves you from having to upload the whole project on every deploy.
You create an EC2 instance to perform the deployment. This instance holds a Git repository of your project together with the Beanstalk environment settings (environment variables, for example), and it deploys to Beanstalk using CodeCommit.
This scenario is very useful for a team project on Beanstalk, because you can use the deployment instance to hide sensitive details and standardise how deploys are done.
