No clear path to do development in a serverless environment.
I have an API Gateway backed by some Lambda functions declared in Terraform. I deploy to the cloud and everything is fine, but how do I go about setting up a proper workflow for development? It seems like a struggle to push every small code change to the cloud just to run it while developing. Terraform has started getting some support from the SAM framework for running Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test your endpoints in Postman, for example.
First of all, I use the Serverless framework instead of Terraform, so my answer is based on what you provided and what I found around.
From what I understand so far from the provided documentation, you are able to run the SAM CLI with Terraform (cf. the "Local testing" chapter).
You might follow this documentation to invoke local functions.
I recommend using JSON files to define your test cases instead of injecting the event via stdin.
The first step is to create your payload in a JSON file and invoke your Lambda with that payload, like so:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
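If you don't have a sample payload handy, the SAM CLI can generate one for you. A minimal sketch (the event type, function name, and file name are placeholders to adapt to your setup):

sam local generate-event apigateway aws-proxy > event.json
sam local invoke "YOUR_LAMBDA_NAME" -e event.json

For the "test my endpoints in Postman" part of the question, it is also worth checking in the linked documentation whether sam local start-api works with the Terraform integration; it spins up a local HTTP server in front of your functions.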
Related
As the title suggests, I am looking for a way to deploy a Terraform file via an AWS Lambda function. I would like to trigger this deployment with a time-based event. This is my first time working with Terraform and I cannot seem to find anything pertaining to this specific use case.
I am much more versed in CloudFormation, so normally I would use the boto3 library to set up a Lambda function that deploys a CloudFormation stack. Does anyone know how to do this with a Terraform file?
In the AWS Lambda service's console, there is a Configuration tab called Database proxies, shown here:
However, in the Terraform registry's entry for an AWS Lambda Function, there does not seem to be a place to define this relationship for my lambda. It's easy enough to add manually after I deploy the Lambda, but for obvious reasons this isn't optimal. It seems like using a DB proxy is a common enough use case for serverless architectures that there would be a way to do this with the resources I've referenced.
What am I missing?
EDIT: As of 9 months ago, this feature was not included in the AWS Provider, but I'm unsure of how to search upcoming nightly or perhaps dev releases of Terraform for this feature...
EDIT EDIT (from my comment below): The RDS instance, its proxy, the roles they use, the Lambdas, and the VPC in which they sit all work as expected. If I go to the tab in the screenshot above for the Lambdas I am deploying, I can Add database proxy just fine using the proxy I deployed with Terraform. There are no issues with the code, nor any errors. The problem is that having to manually add the database proxy to each Lambda I deploy defeats the purpose of using Terraform.
Does anyone have any information on how to run AWS Lambda functions from Rundeck? I was looking into this so that there is a central place where certain users can log into Rundeck and run the scripts that are relevant to them, as not everyone has AWS access.
I found this: https://www.slideshare.net/tetutaro/lambda-and-rundeck-58884982
But I was hoping there might be something more official somewhere and in English :)
A good way to integrate with Lambda is to use the AWS CLI on the Rundeck server and call your functions from a script step or command step in your workflow. Take a look at this.
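For example, a script step could call the function directly with the AWS CLI (a minimal sketch, assuming AWS CLI v2 and a placeholder function name and payload):

aws lambda invoke --function-name my-function --cli-binary-format raw-in-base64-out --payload '{"key": "value"}' response.json
cat response.json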
Also, and similar to this answer, another good way to interact with Lambda is to access it via its API (you have two options: using the HTTP Workflow Step plugin, or a script step in your workflow).
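For instance, if the function sits behind an API Gateway endpoint, the HTTP call from a script step could be as simple as the following (the URL is a placeholder for your own deployment):

curl -X POST -H "Content-Type: application/json" -d '{"key": "value"}' https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>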
Finally, maybe it's a good opportunity to develop a custom plugin focused on AWS Lambda.
I am developing an AWS Lambda function and I have the option of using one of these two, but I can't find a good explanation of the difference between them. Which one should be used, and in which case?
The AWS Serverless Application Model (AWS SAM) is used to define a serverless application. You deploy this application to AWS Lambda via S3.
SAM comes into play when testing the Lambda function locally, because it's not practical to deploy to AWS Lambda and test every time you make a code change.
You can configure SAM in your IDE (Eclipse, for example), test and finalise the code, then deploy it to Lambda.
For more info about SAM: https://github.com/awslabs/serverless-application-model/blob/master/HOWTO.md
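For reference, a minimal SAM template looks roughly like this (the handler, runtime, and code path are placeholders for your own function, not anything prescribed by the question):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.9
      CodeUri: ./src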
I'm moving to serverless with AWS Lambda. I've gotten to "hello world" so far. I'm used to having a development codebase that I work on, test, and then promote to production. Is there an easy way to do this with Lambda?
I use different AWS accounts for dev, staging, and prod. When deploying the Lambda, I just choose which AWS profile to use so it deploys to the right environment.
If you're using a single AWS account, each deployment of a Lambda function will have a version. You can use those.
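For example, with the AWS CLI you can publish an immutable version after each deployment and point an alias such as prod at it (one possible approach; the function name and version numbers below are placeholders):

aws lambda publish-version --function-name my-function
aws lambda create-alias --function-name my-function --name prod --function-version 3
aws lambda update-alias --function-name my-function --name prod --function-version 4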
If you're using API Gateway with Lambda, you can use API Gateway's "Stages".
You should use a deployment framework such as Serverless; that will make things easier for you.
Using a framework like Serverless makes it easy to develop, configure, and deploy Lambdas, API Gateways, and other event sources to AWS. I highly recommend that you adopt the Serverless Framework. It also makes it easier to integrate serverless deployment with your current CI system.
Now, if you have all your environments within one AWS account, you can use stages to represent each environment. With Serverless you can simply deploy the Lambdas to a different environment using the --stage (-s) argument:
serverless deploy -s <env/stage name>
You can add some smarts to the serverless.yml configuration so it picks up configuration files based on your stage (assuming you need access to different resources such as databases, S3 buckets, etc. for different environments), as in the sketch below.
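For example, a serverless.yml can load a per-stage configuration file based on the --stage argument (a sketch; the config/ directory, its keys, and the table name are placeholders for whatever differs between your environments):

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
custom:
  settings: ${file(./config/${opt:stage, 'dev'}.yml)}
functions:
  hello:
    handler: handler.hello
    environment:
      TABLE_NAME: ${self:custom.settings.tableName}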
If you are using different AWS accounts for prod and non-prod (recommended), then all you need to do is provide an additional argument for the profile:
serverless deploy --profile <prod/nonprod profile> --stage <prod/nonprod stage>