How to make Terraform archive_file resource pick up changes to source files? - aws-lambda

Using TF 0.7.2 on a Win 10 machine.
I'm trying to set up an edit/upload cycle for development of my lambda functions in AWS, using the new "archive_file" resource introduced in TF 0.7.1
My configuration looks like this:
resource "archive_file" "cloudwatch-sumo-lambda-archive" {
source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
type = "zip"
}
resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
function_name = "cloudwatch-sumo-lambda"
description = "managed by source project"
filename = "${archive_file.cloudwatch-sumo-lambda-archive.output_path}"
source_code_hash = "${archive_file.cloudwatch-sumo-lambda-archive.output_sha}"
handler = "cloudwatchSumologic.handler"
...
}
This works the first time I run it - TF creates the lambda zip file, uploads it and creates the lambda in AWS.
The problem comes with updating the lambda.
If I edit the cloudwatchSumologic.js file in the above example, TF doesn't appear to know that the source file has changed - it doesn't add the new file to the zip and doesn't upload the new lambda code to AWS.
Am I doing something wrong in my configuration, or is the archive_file resource not meant to be used in this way?

You could be seeing a bug. I'm on 0.7.7 and the issue now is that the SHA changes even when you don't make changes. HashiCorp will be updating this resource to a data source in 0.7.8:
https://github.com/hashicorp/terraform/pull/8492
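For reference, here is a rough sketch of what the data source form could look like once that change lands. The output_base64sha256 attribute comes from later versions of the archive provider, so treat the exact attribute name as an assumption for 0.7.x:

data "archive_file" "cloudwatch-sumo-lambda-archive" {
  type        = "zip"
  source_file = "${var.lambda_src_dir}/cloudwatch/cloudwatchSumologic.js"
  output_path = "${var.lambda_gen_dir}/cloudwatchSumologic.zip"
}

resource "aws_lambda_function" "cloudwatch-sumo-lambda" {
  function_name    = "cloudwatch-sumo-lambda"
  description      = "managed by source project"
  filename         = "${data.archive_file.cloudwatch-sumo-lambda-archive.output_path}"
  # output_base64sha256 matches the base64-encoded SHA256 that source_code_hash expects
  source_code_hash = "${data.archive_file.cloudwatch-sumo-lambda-archive.output_base64sha256}"
  handler          = "cloudwatchSumologic.handler"
  # ... remaining arguments (role, runtime, ...) as in the original resource
}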

Related

Terraform: How can a lambda refer to S3 in the terraform resource `aws_lambda_function`

My Lambda Node.js source code is inside the S3 bucket as a zip file.
I want that source to be uploaded to the function when the aws_lambda_function resource is applied.
resource "aws_lambda_function" "terraform_lambda_func" {
s3_bucket = var.bucket_name
s3_key = "${var.zip_file_name}.zip"
function_name = var.lambdaFunctionName
role = aws_iam_role.okta-iam-v1.arn
handler = "index.handler"
runtime = "nodejs16.x"
}
Wanting it doesn't cut it, because that's not the way the relationship between a Lambda and its code works.
What the aws_lambda_function resource says is: "there is a Lambda function and its code is there in that S3 bucket".
Because updating the file in the bucket doesn't automatically update the code of that Lambda, this resource doesn't have a way to reference new file content directly.
To do so, you need an aws_s3_object resource that uploads the new file to the bucket.
To trigger the actual update of the Lambda, you also need to pass the file hash to the aws_lambda_function. Since the aws_s3_object resource exports a source_hash property, you can link them as in the sketch below.
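Here is a rough sketch of how the two resources fit together. The local source path and the use of filebase64sha256 for the hash are assumptions made for illustration, not something stated in the question:

resource "aws_s3_object" "lambda_code" {
  bucket      = var.bucket_name
  key         = "${var.zip_file_name}.zip"
  # local path of the built zip; adjust to wherever your build step writes it
  source      = "${path.module}/${var.zip_file_name}.zip"
  # base64-encoded SHA256 so the same value can feed source_code_hash below
  source_hash = filebase64sha256("${path.module}/${var.zip_file_name}.zip")
}

resource "aws_lambda_function" "terraform_lambda_func" {
  s3_bucket        = var.bucket_name
  s3_key           = aws_s3_object.lambda_code.key
  # changes whenever the uploaded file changes, forcing a code update
  source_code_hash = aws_s3_object.lambda_code.source_hash
  function_name    = var.lambdaFunctionName
  role             = aws_iam_role.okta-iam-v1.arn
  handler          = "index.handler"
  runtime          = "nodejs16.x"
}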
See How to update aws_lambda_function Terraform resource when ZIP package is changed on S3?

Deploy .sh file to EC2 using Terraform

I am trying to deploy a *.sh file located on my localhost to EC2 using Terraform. Note that I am creating all of the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out a private_key or password for the ubuntu user for deploying? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks beforehand.
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available then you could always do something like how Elastic Beanstalk deploys and use S3 as an intermediary.
Something like this.
resource "aws_s3_bucket_object" "script" {
bucket = module.s3_bucket.bucket_name
key = regex("([^/]+$)", var.script_file)[0]
source = var.script_file
etag = filemd5(var.script_file)
}
resource "aws_instance" "this" {
depends_on = [aws_s3_bucket_object.script]
user_data = templatefile("${path.module}/.scripts/userdata.sh" {
s3_bucket = module.s3_bucket.bucket_name
object_key = aws_s3_bucket_object.script.id
}
...
}
And then somewhere in your userdata script, you can fetch the object from s3.
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.
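For example, something along these lines. This is a minimal sketch: the role and policy names are placeholders, and module.s3_bucket.bucket_arn assumes the bucket module exposes an ARN output:

resource "aws_iam_role" "script_reader" {
  name = "script-reader"
  # allow EC2 to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "script_reader_s3" {
  name = "script-reader-s3"
  role = aws_iam_role.script_reader.id
  # allow reading the uploaded script from the bucket
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = "${module.s3_bucket.bucket_arn}/*"
    }]
  })
}

resource "aws_iam_instance_profile" "script_reader" {
  name = "script-reader"
  role = aws_iam_role.script_reader.name
}

Then reference it from the instance with iam_instance_profile = aws_iam_instance_profile.script_reader.name.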

Why does terraformer not find plugin?

I am new to the world of terraform. I am trying to use terraformer on a GCP project, but keep getting plugin not found:
terraformer import google --resources=gcs,forwardingRules,httpHealthChecks --connect=true --regions=europe-west1,europe-west4 --projects=myproj
2021/02/12 08:17:03 google importing project myproj region europe-west1
2021/02/12 08:17:04 google importing... gcs
2021/02/12 08:17:05 open \.terraform.d/plugins/windows_amd64: The system cannot find the path specified.
Prior to the above, I installed terraformer thus: choco install terraformer. I downloaded the Google provider, terraform-provider-google_v3.56.0_x5.exe, placed it in my terraform dir, and did terraform init. I checked %AppData\Roaming\terraform.d\plugins\windows_amd64; the plugins\windows_amd64 directory did not exist, so I created it, dropped in the Google exe, and redid the init.
My main.tf was:
provider "google" {
project = "{myproj"
region = "europe-west2"
zone = "europe-west2-a"
}
Could it be the chocolatey install causing this issue? I have never used it before but the alternative install looks daunting!
If you install from Chocolatey, it seems it looks for the provider files at C:\.terraform.d\plugins\windows_amd64. If you create those directories and drop the terraform-provider-xxx.exe file in there it will be picked up.
I would also suggest running it with the --verbose flag.
Remember to execute "terraform init".
Once you install terraformer, check this:
Create the versions.tf file, for example:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}
Execute terraform init
Start the import process, something like:
terraformer import aws --profile [my-aws-profile] --resources="*" --regions=eu-west-1
Where --resources="*" will get everything from aws, but you could specify route53 or other resources
terraformer import aws --profile [my-aws-profile] --resources="route53" --regions=eu-west-1
The daunting instructions worked!
Run git clone <terraformer repo>
Run go mod download
Run go build -v for all providers, OR build with one provider: go run build/main.go {google,aws,azure,kubernetes etc.}
Run terraform init against a versions.tf file to install the plugins required for your platform. For example, if you need plugins for the google provider, versions.tf should contain:
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
  required_version = ">= 0.13"
}
making sure to run that version and not the Chocolatey one.
For go mod download, the -x flag will give output while the command runs; otherwise you might think nothing is happening.

DevOps : AWS Lambda .zip with Terraform

I have written Terraform to create a Lambda function in AWS.
This includes specifying my python code zipped.
Running it from the command line on my tech box, all goes well.
The terraform apply action sees my zip moved into AWS and used to create the lambda.
Key section of code:
resource "aws_lambda_function" "meta_lambda" {
  filename         = "get_resources.zip"
  source_code_hash = filebase64sha256("get_resources.zip")
  .....
Now, to get this into other environments, I have to push my Terraform via Azure DevOps.
However, when I try to build in DevOps, I get the following :
Error: Error in function call

  on main.tf line 140, in resource "aws_lambda_function" "meta_lambda":
 140:   source_code_hash = filebase64sha256("get_resources.zip")

Call to function "filebase64sha256" failed: no file exists at get_resources.zip.
I have a feeling that I am missing a key concept here as I can see the .zip in the repo - so do not understand how it is not found by the build?
Any hints/clues as to what I am doing wrong, gratefully welcome.
Chaps, I'm afraid that I may have just been in over my head here - new to Terraform & DevOps!
I had a word with our (more) tech folks and they have sorted this.
The reason I think yours is failing is because the Tar Terraform step needs to use a different command line so it gets the zip file included in the artifacts. Instead of
tar -cvpf terraform.tar .terraform .tf tfplan
it should be
tar --recursion -cvpf terraform.tar --exclude='/.git' --exclude='.gitignore' .
... if that means anything to you!
Whatever they did, it works!
As there is a bounty on this, I'm still going to assign it, as I am grateful for the input!
Sorry if this was a bit of a newbie error.
You can try building your package with the Terraform AWS Lambda build module, as it has been very useful for the process: Terraform Lambda build module
According to the document example, in the source_code_hash argument, filebase64sha256("get_resources.zip") needs to be enclosed in double quotes.
You can refer to this document for details.
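In other words, a sketch of the quoted interpolation form that the linked document example uses, with the same zip name as above:

resource "aws_lambda_function" "meta_lambda" {
  filename         = "get_resources.zip"
  # the function call wrapped in an interpolation string, per the document example
  source_code_hash = "${filebase64sha256("get_resources.zip")}"
  # ... other arguments as above
}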

How to achieve multiple files and directories in terraform

data "archive_file" "example" {
type = "zip"
output_path = "${local.dest_dir}/hello_upload.zip"
source_file = "${local.src_dir}/hello.py"
source_dir = "${local.src_dir}/pytz"
source_dir = "${local.src_dir}/pytz-2018.5.dist-info"
}
Note that hello.py needs to import pytz, which is not included in Lambda; that is why I want to upload the package.
When I run the above Terraform I get the error: "source_dir": conflicts with source_file. How can I upload both my Lambda file hello.py and the pytz package, which is a directory?
I had a similar issue when I wanted to add a Python lib defined by symbolic links (also known as "symlinks"). The archive provider of Terraform is buggy in that case.
I worked around it by using a null_resource to complete the zip archive:
resource "null_resource" "add_my_lib" {
provisioner "local-exec" {
command = "zip -ur ./archive.zip /path/to/the/lib"
}
}
Then don't forget to add a depends_on attribute to the resource using the archive.
resource "aws_s3_bucket_object" "my_lambda_layer" {
...
depends_on = [null_resource.add_my_lib]
}
You might be able to use an external data source to copy all files to a temporary directory and then archive that directory.
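A sketch of that idea, using a null_resource staging step (in the spirit of the previous answer) rather than an external data source; the build directory name and the copy command are placeholders:

# stage everything into one directory, then zip that directory
resource "null_resource" "stage_sources" {
  provisioner "local-exec" {
    command = "mkdir -p ${local.src_dir}/build && cp -r ${local.src_dir}/hello.py ${local.src_dir}/pytz ${local.src_dir}/pytz-2018.5.dist-info ${local.src_dir}/build/"
  }
}

data "archive_file" "example" {
  type        = "zip"
  output_path = "${local.dest_dir}/hello_upload.zip"
  # a single source_dir avoids the source_dir/source_file conflict
  source_dir  = "${local.src_dir}/build"
  depends_on  = [null_resource.stage_sources]
}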
