I'm trying to create a Terraform module that will build my JS Lambdas, zip them, and deploy them. This, however, is proving to be problematic.
resource "null_resource" "build_lambda" {
count = length(var.lambdas)
provisioner "local-exec" {
command = "mkdir tmp"
working_dir = path.root
}
provisioner "local-exec" {
command = var.lambdas[count.index].code.build_command
working_dir = var.lambdas[count.index].code.working_dir
}
}
data "archive_file" "lambda_zip" {
count = length(var.lambdas)
type = "zip"
source_dir = var.lambdas[count.index].code.working_dir
output_path = "${path.root}/tmp/${count.index}.zip"
depends_on = [
null_resource.build_lambda
]
}
/*******************************************************
* Lambda definition
*******************************************************/
resource "aws_lambda_function" "lambda" {
count = length(var.lambdas)
filename = data.archive_file.lambda_zip[count.index].output_path
source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
function_name = "${var.application_name}-${var.lambdas[count.index].name}"
description = var.lambdas[count.index].description
handler = var.lambdas[count.index].handler
runtime = var.lambdas[count.index].runtime
role = aws_iam_role.iam_for_lambda.arn
memory_size = var.lambdas[count.index].memory_size
depends_on = [aws_iam_role_policy_attachment.lambda_logs, aws_cloudwatch_log_group.log_group, data.archive_file.lambda_zip]
}
The property source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path), although technically not mandatory, is necessary: without it an existing Lambda will never be replaced, because Terraform will think it is still the same version and skip the deployment altogether. Unfortunately, the filebase64sha256 function appears to be evaluated before any resource is created. This means there is no zip yet for the hash calculation, so I get the error:
Error: Error in function call
on modules\api-gateway-lambda\main.tf line 35, in resource "aws_lambda_function" "lambda":
35: source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
|----------------
| count.index is 0
| data.archive_file.lambda_zip is tuple with 1 element
Call to function "filebase64sha256" failed: no file exists at tmp\0.zip.
If I manually place a zip in the right location, I can see that the whole thing starts working and the zip eventually gets overwritten by a new one, but the hash in that case must come from the previous zip.
What is the right way to execute the whole thing in the right order?
The archive_file data source has its own output_base64sha256 attribute which can give you that same result without asking Terraform to read a file that doesn't exist yet:
source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
The data source will populate this at the same time it creates the file, and because your lambda function depends on the data source it will therefore always be available before the lambda function configuration is evaluated.
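Applied to the configuration above, the Lambda resource would look roughly like this (a minimal sketch: only the source_code_hash argument changes, and arguments such as description and depends_on are elided but stay as in the question):

resource "aws_lambda_function" "lambda" {
  count    = length(var.lambdas)
  filename = data.archive_file.lambda_zip[count.index].output_path

  # The hash comes from the data source itself, so no zip file has to
  # exist on disk when the configuration is evaluated.
  source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256

  function_name = "${var.application_name}-${var.lambdas[count.index].name}"
  handler       = var.lambdas[count.index].handler
  runtime       = var.lambdas[count.index].runtime
  role          = aws_iam_role.iam_for_lambda.arn
  memory_size   = var.lambdas[count.index].memory_size
}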
Related
I'm trying to write a custom rule for gqlgen. The idea is to run it to generate Go code from a GraphQL schema.
My intended usage is:
gqlgen(
    name = "gql-gen-foo",
    schemas = ["schemas/schema.graphql"],
    visibility = ["//visibility:public"],
)
"name" is the name of the rule, on which I want other rules to depend; "schemas" is the set of input files.
So far I have:
load(
    "@io_bazel_rules_go//go:def.bzl",
    _go_context = "go_context",
    _go_rule = "go_rule",
)
def _gqlgen_impl(ctx):
    go = _go_context(ctx)
    args = ["run github.com/99designs/gqlgen --config"] + [ctx.attr.config]
    ctx.actions.run(
        inputs = ctx.attr.schemas,
        outputs = [ctx.actions.declare_file(ctx.attr.name)],
        arguments = args,
        progress_message = "Generating GraphQL models and runtime from %s" % ctx.attr.config,
        executable = go.go,
    )
_gqlgen = _go_rule(
    implementation = _gqlgen_impl,
    attrs = {
        "config": attr.string(
            default = "gqlgen.yml",
            doc = "The gqlgen filename",
        ),
        "schemas": attr.label_list(
            allow_files = [".graphql"],
            doc = "The schema file location",
        ),
    },
    executable = True,
)
def gqlgen(**kwargs):
    tags = kwargs.get("tags", [])
    if "manual" not in tags:
        tags.append("manual")
    kwargs["tags"] = tags
    _gqlgen(**kwargs)
My immediate issue is that Bazel complains that the schemas are not Files:
expected type 'File' for 'inputs' element but got type 'Target' instead
What's the right approach to specify the input files?
Is this the right approach to generate a rule that executes a command?
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
Instead of:
ctx.actions.run(
inputs = ctx.attr.schemas,
Use:
ctx.actions.run(
inputs = ctx.files.schemas,
Is this the right approach to generate a rule that executes a command?
This looks right, as long as gqlgen creates the file with the correct output name (outputs = [ctx.actions.declare_file(ctx.attr.name)]).
generated_go_file = ctx.actions.declare_file(ctx.attr.name + ".go")

# ..

ctx.actions.run(
    outputs = [generated_go_file],
    args = ["run", "...", "--output", generated_go_file.short_path],
    # ..
)
Finally, is it okay to have the output file not exist in the filesystem, but rather be a label on which other rules can depend?
The output file needs to be created, and as long as it's returned at the end of the rule implementation in a DefaultInfo provider, other rules will be able to depend on the file label (e.g. //my/package:foo-gqlgen.go).
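As a rough sketch of that last point, assuming the output was declared as generated_go_file as in the snippet above, the implementation function would end with:

    # Return the declared output so that other rules can depend on this
    # target's label and receive the generated file.
    return [DefaultInfo(files = depset([generated_go_file]))]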
Uploading to AWS Lambda requires generating a zip archive of the source code and required libraries. When using Node.js as the Lambda language, you typically want a source file plus the node_modules directory included in the zip archive. The Terraform archive provider gives an archive_file data source, which works well when it can be used, but it can't be used when you want more than just one file or one directory (see the feature request). To work around this, I came up with the code below. It executes the steps, but not in the required sequence: run it once and it updates the zip file but doesn't upload it to AWS; run it again and it uploads to AWS.
# This resource checks the state of the node_modules directory, hoping to determine,
# most of the time, when there was a change in that directory. Output
# is a 'mark' file with that data in it. That file can be hashed to
# trigger updates to zip file creation.
resource "null_resource" "get_directory_mark" {
  provisioner "local-exec" {
    command     = "ls -l node_modules > node_modules.mark; find node_modules -type d -ls >> node_modules.mark"
    interpreter = ["bash", "-lc"]
  }

  triggers = {
    always = "${timestamp()}" # will trigger each run - small cost.
  }
}
resource "null_resource" "make_zip" {
depends_on = ["null_resource.get_directory_mark"]
provisioner "local-exec" {
command = "zip -r ${var.lambda_zip} ${var.lambda_function_name}.js node_modules"
interpreter = ["bash", "-lc"]
}
triggers = {
source_hash = "${sha1("${file("lambda_process_firewall_updates.js")}")}"
node_modules = "${sha1("${file("node_modules.mark")}")}" # see above
}
}
resource "aws_lambda_function" "lambda_process" {
depends_on = ["null_resource.make_zip"]
filename = "${var.lambda_zip}"
function_name = "${var.lambda_function_name}"
description = "process items"
role = "${aws_iam_role.lambda_process.arn}"
handler = "${var.lambda_function_name}.handler"
runtime = "nodejs8.10"
memory_size = "128"
timeout = "60"
source_code_hash = "${base64sha256(file("lambda_process.zip"))}"
}
Other related discussion includes this question on code hashing (see my answer) and this GitHub issue.
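One way to avoid the two-pass behaviour, combining this with the output_base64sha256 approach from the answer above, is to stage the source file and node_modules into a single directory and let the archive_file data source zip and hash it. The sketch below assumes such a staging step; the build directory name and its trigger are illustrative, and changes inside node_modules would still need something like the mark-file trigger above to force re-staging:

# Stage the function source next to node_modules so the archive_file data
# source, which accepts only one file or one directory, can zip both.
resource "null_resource" "stage_lambda" {
  provisioner "local-exec" {
    command     = "mkdir -p build && cp ${var.lambda_function_name}.js build/ && cp -R node_modules build/"
    interpreter = ["bash", "-lc"]
  }

  triggers = {
    source_hash = "${sha1(file("${var.lambda_function_name}.js"))}"
  }
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "build"
  output_path = "${var.lambda_zip}"
  depends_on  = ["null_resource.stage_lambda"]
}

resource "aws_lambda_function" "lambda_process" {
  filename      = "${data.archive_file.lambda_zip.output_path}"
  function_name = "${var.lambda_function_name}"
  role          = "${aws_iam_role.lambda_process.arn}"
  handler       = "${var.lambda_function_name}.handler"
  runtime       = "nodejs8.10"

  # The data source exposes the hash itself, so the zip does not have to
  # exist before the configuration is evaluated.
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}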
I'm relatively new to Terraform and I'm trying to iterate over all aws_instance resources to apply a null_resource. Can you use multiple splats to access all instances, regardless of their names?
The EC2 instances are broken down into three types:
aws_instance.web.* (3 instances)
aws_instance.app.* (3 instances)
aws_instance.db.* (2 instances)
Here's my attempt to apply a null_resource to all eight aws_instances:
resource "null_resource" "install_security_package" {
#count = "${length(aws_instance)}" #terraform error: resource count can't reference variable: aws_instance
#count = "${length(aws_instance.*)}" #terraform error: resource variables must be three parts: TYPE.NAME.ATTR
count = "${length(aws_instance.*.*)}" #terraform error: unknown resource 'aws_instance.*'
connection {
type = "ssh"
host = "${element(aws_instance.*.private_ip, count.index)}"
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.private_key_path}")}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
]
}
}
It is not currently possible to match all resources of a given type. The "splat" syntax, as you've seen, only allows selecting all of the instances created from a particular resource block.
The closest you can get to this with Terraform today is to concatenate together the different resources:
concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)
In the current version of Terraform as of this answer, it is necessary to use some of the workarounds shared in GitHub issue #4084 to avoid duplicating that complex expression in multiple places. A forthcoming feature called Local Values will make this simpler in the near future, allowing the list to be given a name that can be re-used in multiple places:
# Won't work until Terraform PR#15449 is merged and released

locals {
  aws_instance_addrs = "${concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)}"
}

resource "null_resource" "install_security_package" {
  count = "${length(local.aws_instance_addrs)}"

  connection {
    type        = "ssh"
    host        = "${local.aws_instance_addrs[count.index]}"
    user        = "${lookup(var.user, var.platform)}"
    private_key = "${file("${var.private_key_path}")}"
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
    ]
  }
}
I am using Terraform for continuous deployment of Lambda functions. The Lambda module creates the function and the initial aliases [DEV, QA, PROD]. When a change is made, the source_code_hash is updated and Terraform updates the code. The challenge is that when I want to update the alias from DEV to QA, it updates the entire stack. The code is below. Your help is appreciated.
$ cat main.tf

module "sample" {
  source           = "./lambda"
  name             = "sample"
  runtime          = "nodejs6.10"
  role             = "${aws_iam_role.iam_role_for_lambda.arn}"
  filename         = "../Archive.zip"
  source_code_hash = "${base64sha256(file("../Archive.zip"))}"
  source_dir       = "../sample"
  alias            = "${var.env_name}"
}
$ cat module/main.tf

resource "aws_lambda_function" "lambda" {
  filename         = "${var.filename}"
  function_name    = "${var.name}"
  role             = "${var.role}"
  handler          = "${var.name}.${var.handler}"
  runtime          = "${var.runtime}"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  publish          = "true"
}

resource "aws_lambda_alias" "lambda_alias" {
  count            = "2"
  name             = "${element(var.alias, count.index)}"
  #name            = "${var.alias}"
  description      = "${var.name}"
  function_name    = "${aws_lambda_function.lambda.arn}"
  function_version = "${aws_lambda_function.lambda.version}"
}
You need different tfvars and tfstate files for different environments.
Assuming you use a recent Terraform version, create an environment folder (for example, env) and add <env>-backend.tf and <env>.tfvars files for each environment:
$ cat main.tf

terraform {
  required_version = ">= 0.9.1"

  backend "s3" {
    encrypt = "true"
  }
}
$ cat env/dev-backend.tf
bucket = "terraform-<change-to-s3-global-unique-id>"
key = "terraform/dev/terraform.tfstate"
kms_key_id = "xxxx-xxxx-xxxx-xxxx"
$ cat env/dev.tfvars
env_name = "dev"
Do the same for qa and prod environments.
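For example, the qa pair would look the same apart from the state key and the variable value (bucket and KMS key shown as placeholders, as above):

$ cat env/qa-backend.tf

bucket     = "terraform-<change-to-s3-global-unique-id>"
key        = "terraform/qa/terraform.tfstate"
kms_key_id = "xxxx-xxxx-xxxx-xxxx"

$ cat env/qa.tfvars

env_name = "qa"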
You should then be able to run the commands below to get the same Lambda Terraform code working in different environments:
rm -rf .terraform
export env="dev"
terraform init -backend=true -backend-config=env/${env}-backend.tf
terraform plan -var-file=./env/${env}.tfvars
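With the state split per environment, the module only needs to manage the single alias for the environment it was initialised for, which is what the commented-out line in the module above points at. A rough sketch of the alias resource under that assumption:

resource "aws_lambda_alias" "lambda_alias" {
  # One alias per environment; which one is managed depends on the
  # backend/tfvars pair the working directory was initialised with.
  name             = "${var.alias}"
  description      = "${var.name}"
  function_name    = "${aws_lambda_function.lambda.arn}"
  function_version = "${aws_lambda_function.lambda.version}"
}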
I am writing a custom type for Puppet and use the following code to copy a module file specified by a puppet url to the user's home directory:
def generate
  if self[:source]
    uri  = URI.parse(self[:source])
    path = File.join(Etc.getpwnam(self[:user])[:dir], File.basename(uri.path))

    file_opts = {}
    file_opts[:name]   = File.join(Etc.getpwnam(self[:user])[:dir], File.basename(uri.path))
    file_opts[:ensure] = self[:ensure] == :absent ? :absent : :file
    file_opts[:source] = self[:source]
    file_opts[:owner]  = self[:user]

    self[:source] = path

    Puppet::Type.type(:file).new(file_opts)
  end
end
Things are working fine so far. The resource is added to the catalog and created on the agent side. But I have a problem...
How can I specify that this additional file resource must be created before the actual type gets executed? Unfortunately, I cannot find an example that shows how to specify a dependency on an optional resource defined in a generate method.