Terraform: How can a Lambda function refer to S3 in the Terraform resource `aws_lambda_function`? - aws-lambda

My Lambda Node.js source code is inside an S3 bucket as a zip file.
I want that source to be used when the aws_lambda_function resource is applied:
resource "aws_lambda_function" "terraform_lambda_func" {
s3_bucket = var.bucket_name
s3_key = "${var.zip_file_name}.zip"
function_name = var.lambdaFunctionName
role = aws_iam_role.okta-iam-v1.arn
handler = "index.handler"
runtime = "nodejs16.x"
}

Wanting it doesn't cut it, because that's not the way the relationship between a Lambda and its code works.
What the aws_lambda_function resource does is say: "there is a Lambda function, and its code is there in that S3 bucket".
Because updating the file in the bucket doesn't automatically update the code of that Lambda, this resource doesn't have a way to reference new file content directly.
To do so, you need an aws_s3_object resource that uploads the new file to the bucket.
To trigger the actual update of the Lambda, you also need to pass the file hash to the aws_lambda_function. Since the aws_s3_object resource exports a source_hash property, you can link them as shown below.
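For illustration, wiring the two resources together might look like this; the local path of the zip in source is an assumption, and the other names are reused from the question:

# Upload the zip; source_hash changes whenever the local file changes.
resource "aws_s3_object" "lambda_code" {
  bucket      = var.bucket_name
  key         = "${var.zip_file_name}.zip"
  source      = "${path.module}/${var.zip_file_name}.zip" # local build artifact (assumed path)
  source_hash = filemd5("${path.module}/${var.zip_file_name}.zip")
}

resource "aws_lambda_function" "terraform_lambda_func" {
  s3_bucket     = aws_s3_object.lambda_code.bucket
  s3_key        = aws_s3_object.lambda_code.key
  function_name = var.lambdaFunctionName
  role          = aws_iam_role.okta-iam-v1.arn
  handler       = "index.handler"
  runtime       = "nodejs16.x"

  # Passing the hash through makes Terraform redeploy the function
  # whenever the uploaded file's content changes.
  source_code_hash = aws_s3_object.lambda_code.source_hash
}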
See How to update aws_lambda_function Terraform resource when ZIP package is changed on S3?

Related

AWS Lambda export API Gateway backup to S3

I am trying to configure a Lambda function which will export an API backup to S3. But when I try to get an ordinary Swagger backup through Lambda using this script:
import boto3
client = boto3.client('apigateway')
def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            extensions: 'authorizers'
        },
        accepts='application/json'
    )
I am getting this error:
[ERROR] NameError: name 'extensions' is not defined
Please help me resolve this issue.
Could you please check whether the documentation has been explicitly published, and whether it has been deployed to a stage, before it is available in the export.
The problem is in:
parameters={
    extensions: 'authorizers'
}
You're passing a dictionary, which is fine, but the key should be a string. Since you don't have quotes around extensions, Python tries to resolve it as a variable named extensions, which doesn't exist in your code, and so it raises the NameError.
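For illustration, the same call with the key quoted as a string:

import boto3

client = boto3.client('apigateway')

def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            # Quoting 'extensions' makes it a dictionary key instead of
            # an undefined variable name.
            'extensions': 'authorizers'
        },
        accepts='application/json'
    )
    return response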

Deploy .sh file to EC2 using Terraform

I am trying to deploy a *.sh file located on my localhost to EC2, using Terraform. Note that I am creating all the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out a private_key or password for the ubuntu user for deploying? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks beforehand.
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available then you could always do something like how Elastic Beanstalk deploys and use S3 as an intermediary.
Something like this:
resource "aws_s3_bucket_object" "script" {
bucket = module.s3_bucket.bucket_name
key = regex("([^/]+$)", var.script_file)[0]
source = var.script_file
etag = filemd5(var.script_file)
}
resource "aws_instance" "this" {
depends_on = [aws_s3_bucket_object.script]
user_data = templatefile("${path.module}/.scripts/userdata.sh" {
s3_bucket = module.s3_bucket.bucket_name
object_key = aws_s3_bucket_object.script.id
}
...
}
And then somewhere in your userdata script, you can fetch the object from s3.
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.
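As a hedged sketch, that role and instance profile could look something like this; all names here are made up for illustration:

# Role the instance assumes; grants read access to the script object only.
resource "aws_iam_role" "script_reader" {
  name = "script-reader"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "script_read" {
  name = "script-read"
  role = aws_iam_role.script_reader.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action   = ["s3:GetObject"]
      Effect   = "Allow"
      Resource = "arn:aws:s3:::${module.s3_bucket.bucket_name}/*"
    }]
  })
}

resource "aws_iam_instance_profile" "script_reader" {
  name = "script-reader"
  role = aws_iam_role.script_reader.name
}

# Then on the instance:
#   iam_instance_profile = aws_iam_instance_profile.script_reader.name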

DevOps: AWS Lambda .zip with Terraform

I have written Terraform to create a Lambda function in AWS.
This includes specifying my python code zipped.
Running it from the command line on my tech box, all goes well.
The terraform apply action sees my zip moved into AWS and used to create the Lambda.
Key section of code:
resource "aws_lambda_function" "meta_lambda" {
filename = "get_resources.zip"
source_code_hash = filebase64sha256("get_resources.zip")
.....
Now, to get this into other environments, I have to push my Terraform via Azure DevOps.
However, when I try to build in DevOps, I get the following :
Error: Error in function call

  on main.tf line 140, in resource "aws_lambda_function" "meta_lambda":
 140:   source_code_hash = filebase64sha256("get_resources.zip")

Call to function "filebase64sha256" failed: no file exists at get_resources.zip.
I have a feeling that I am missing a key concept here as I can see the .zip in the repo - so do not understand how it is not found by the build?
Any hints/clues as to what I am doing wrong, gratefully welcome.
Chaps, I'm afraid that I may have just been in over my head here - new to Terraform & DevOps!
I had a word with our (more) tech folks and they have sorted this.
The reason I think yours is failing is because the Tar Terraform step
needs to use a different command line so it gets the zip file included
into the artifacts. Instead of:
tar -cvpf terraform.tar .terraform *.tf tfplan
use:
tar --recursion -cvpf terraform.tar --exclude='*/.git' --exclude='.gitignore' .
... if that means anything to you!
Whatever they did, it works!
As there is a bounty on this, I'm still going to award it, as I am grateful for the input!
Sorry if this was a bit of a newbie error.
You can try building your package with the Terraform AWS Lambda build module, as it has been very useful for this process: Terraform Lambda build module
According to the documentation example, the filebase64sha256("get_resources.zip") call in the source_code_hash argument needs to be enclosed in double quotes as an interpolation: source_code_hash = "${filebase64sha256("get_resources.zip")}".
You can refer to this document for details.
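As a rough sketch of what using that build module looks like - the function name, handler, runtime, and source path below are placeholders, not the asker's actual values:

module "meta_lambda" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "meta_lambda"
  handler       = "get_resources.lambda_handler" # assumed entry point
  runtime       = "python3.9"

  # The module zips this directory and computes the hash for you,
  # so you no longer call filebase64sha256 yourself.
  source_path = "${path.module}/src"
}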

Get previous version of code in AWS Lambda

I deploy very simple code to AWS Lambda using these commands:
zip http-endpoint lambda_function.py
aws lambda update-function-code --function-name 'http-endpoint' --zip-file fileb://http-endpoint.zip --region us-east-1
Is it possible to see previous version of code somehow?
You can get data about your function using get-function. This returns a Code object containing a location for your function's code artifact. See the docs here and scroll down to the output section:
Location -> (string)
The presigned URL you can use to download the function's .zip file that you previously uploaded. The URL is valid for up to 10 minutes.
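Note that update-function-code overwrites $LATEST, so this only recovers older code if you published versions along the way. Assuming versions were published, something like this would fetch an earlier one:

# List the published versions, then download a specific one via its
# presigned Code.Location URL (version number here is illustrative).
aws lambda list-versions-by-function --function-name 'http-endpoint' --region us-east-1
aws lambda get-function --function-name 'http-endpoint' --qualifier 1 --region us-east-1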

How to make Terraform archive_file resource pick up changes to source files?

Using TF 0.7.2 on a Win 10 machine.
I'm trying to set up an edit/upload cycle for development of my lambda functions in AWS, using the new "archive_file" resource introduced in TF 0.7.1
My configuration looks like this:
resource "archive_file" "cloudwatch-sumo-lambda-archive" {
source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
type = "zip"
}
resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
function_name = "cloudwatch-sumo-lambda"
description = "managed by source project"
filename = "${archive_file.cloudwatch-sumo-lambda-archive.output_path}"
source_code_hash = "${archive_file.cloudwatch-sumo-lambda-archive.output_sha}"
handler = "cloudwatchSumologic.handler"
...
}
This works the first time I run it - TF creates the lambda zip file, uploads it and creates the lambda in AWS.
The problem comes with updating the lambda.
If I edit the cloudwatchSumologic.js file in the above example, TF doesn't appear to know that the source file has changed - it doesn't add the new file to the zip and doesn't upload the new lambda code to AWS.
Am I doing something wrong in my configuration, or is the archive_file resource not meant to be used in this way?
You could be seeing a bug. I'm on 0.7.7, and the issue now is that the SHA changes even when you don't make changes. HashiCorp will be updating this resource to a data source in 0.7.8:
https://github.com/hashicorp/terraform/pull/8492
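Once archive_file became a data source, the equivalent configuration would look roughly like this; a sketch assuming the hashicorp/archive data source, which exposes output_base64sha256 in the format source_code_hash expects:

data "archive_file" "cloudwatch-sumo-lambda-archive" {
  source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
  output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
  type        = "zip"
}

resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
  function_name = "cloudwatch-sumo-lambda"
  description   = "managed by source project"

  # Referencing the data source means the zip is rebuilt, and the hash
  # re-evaluated, on every plan, so source edits trigger an update.
  filename         = data.archive_file.cloudwatch-sumo-lambda-archive.output_path
  source_code_hash = data.archive_file.cloudwatch-sumo-lambda-archive.output_base64sha256
  handler          = "cloudwatchSumologic.handler"
  # ... remaining arguments as before
}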
