terraform: lambda invocation during destroy

I have a Lambda invocation in our Terraform-built environment:
data "aws_lambda_invocation" "this" {
count = var.invocation == "true" ? 1 : 0
function_name = aws_lambda_function.this.function_name
input = <<JSON
{
"Name": "Invocation"
}
JSON
}
The problem: the function is invoked not only during creation ("apply") but also during deletion ("destroy"). How can I invoke it during creation only? I thought about checking environment variables in the Lambda (perhaps Terraform adds the name of the process there, or something like that), but I hope there's a better way.

It's worth checking whether you can use the -var 'lambda_xxx=execute' option when running the terraform command to decide whether the Lambda code needs to be executed (see the Terraform docs on input variables).
Using that lambda_xxx variable passed in via the command line, you can check in the Terraform code whether you want to run the Lambda code or not.
The code below creates a WAF rule only if the count is 1:
resource "aws_waf_rule" "wafrule" {
depends_on = ["aws_waf_ipset.ipset"]
name = "${var.environment}-WAFRule"
metric_name = "${replace(var.environment, "-", "")}WAFRule"
count = "${var.is_waf_enabled == "true" ? 1 : 0}"
predicates {
data_id = "${aws_waf_ipset.ipset.id}"
negated = false
type = "IPMatch"
}
}
The variable is declared in the variables.tf file:
variable "is_waf_enabled" {
type = "string"
default = "false"
description = "String value to indicate if WAF/API KEY is turned on or off (true/any_value)"
}
When you run the command, any value other than true is treated as false, since we are only checking for the string true.
You can do the same for your Lambda, as sketched below.
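For example, with the var.invocation guard already present in the question's data block, the flag can be flipped per operation; a sketch of the commands (variable name taken from the question):

# invoke the function while building the environment
terraform apply -var 'invocation=true'

# skip the invocation when tearing the environment down
terraform destroy -var 'invocation=false'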

There are better alternative solutions for this problem now, which weren't available at the time the question was asked.
Lambda Invocation Resource in AWS provider: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_invocation
Lambda Based Resource in LambdaBased provider: https://registry.terraform.io/providers/thetradedesk/lambdabased/latest/docs/resources/lambdabased_resource
With the disclaimer that I'm the developer of the latter one: if the underlying problem is managing a resource through Lambda functions, the Lambda-based resource has some good features tailored specifically to accomplish that, with the obvious drawback of adding another provider dependency.
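For reference, a minimal sketch of the first option, the aws_lambda_invocation resource from the AWS provider, which (unlike the data source) is not re-invoked as part of a destroy by default; argument names follow the provider docs linked above, but treat the snippet as untested:

resource "aws_lambda_invocation" "this" {
  count         = var.invocation == "true" ? 1 : 0
  function_name = aws_lambda_function.this.function_name

  # jsonencode keeps the payload as a single JSON string
  input = jsonencode({
    Name = "Invocation"
  })
}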

Related

Refactor CHEF cookbooks

I have a Chef recipe for package installation on Windows hosts. We used to use the Chef 13 client; however, we have upgraded to Chef 16, and we are getting some issues with the cookbook migration.
When I run the existing role as-is on a Windows Server 2016 / Chef 16 Test Kitchen platform, I get this error:
Chef::Exceptions::ImmutableAttributeModification
------------------------------------------------
Node attributes are read-only when you do not specify which precedence level to set. To set an attribute use code like `node.default["key"] = "value"'
and it points to this section of the recipe:
67: guid = guid.gsub(/{/, '').gsub(/}/, '') # remove {}
68: log "[#{role}][#{recipe_name}][#{app_name}]: GUID for installer #{app_name} is #{guid}"
69>> app_params.store('product_guid', guid) # add GUID to the list of parameters
70: end
Hence, what I understand is that Chef 16 doesn't like the Hash#store method.
I have tried some alternative ways to update the hash; however, they do not work.
Using the merge method:
app_params.merge!({ :product_guid => 'guid' })
Using a precedence level explicitly (I have used .default and .override as well; the symptom is the same):
node.force_override!['app_params']['product_guid'] = guid
(reference: https://github.com/chef/chef/issues/6563)
Note that when using the above methods, I do not get any failures; however, the hash doesn't get updated at all.
I have added a few log statements to show this:
log "[#{role}][#{recipe_name}][#{app_name}]: app_params.keys is #{app_params.keys}"
log "[#{role}][#{recipe_name}][#{app_name}]: app_params is #{JSON.pretty_generate(app_params)}"
* log[[my-role][my-recipe][my-pkgName]: app_params.keys is ["app_name", "full_path", "action"]] action write
* log[[my-role][my-recipe][my-pkgName]: app_params is {
"app_name": "my-pkgName",
"full_path": "https://myArtifactRepo/.../.../../my-pkgName.msi",
"action": "install"
}] action write
## ^^ hash keys and hash values (app_params.keys and app_params) BEFORE the force_override method is used
log "[#{role}][#{recipe_name}][#{app_name}]: app_params.keys is #{app_params.keys}"
log "[#{role}][#{recipe_name}][#{app_name}]: app_params is #{JSON.pretty_generate(app_params)}"
* log[[my-role][my-recipe][my-pkgName]: app_params.keys is ["app_name", "full_path", "action"]] action write
* log[[my-role][my-recipe][my-pkgName]: app_params is {
"app_name": "my-pkgName",
"full_path": "https://myArtifactRepo/.../.../../my-pkgName.msi",
"action": "install"
}] action write
## ^^ hash keys and hash values (app_params.keys and app_params) AFTER the force_override method is used
Note that the hash and its keys have not been updated; they are still
["app_name", "full_path", "action"];
however we would expect
["app_name", "full_path", "action", "product_guid"]
and the hash itself to be something like:
"app_name": "my-pkgName",
"full_path": "https://myArtifactRepo/.../.../../my-pkgName.msi",
"action": "install",
"product_guid" : XXXXYYY-0WRR-1234-ABCD-3ERDFR234GRT

FUNCTION_REGION env variable in Node.js is different from what GCP sets automatically for logs

I programmatically write logs from the function using code like this:
import {Logging} from '@google-cloud/logging';

const logging = new Logging();
const log = logging.log('log-name');
const metadata = {
  type: 'cloud_function',
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  },
};
log.write(
  log.entry(metadata, "some message")
);
Later, in Logs Explorer, I get the log message where labels.region is us1, whereas the standard logs that GCP adds, e.g. "Function execution started", contain the value us-central1.
Shouldn't they be the same? Maybe I missed something, or if it was done intentionally, what is the reason behind it?
process.env.FUNCTION_REGION is only supported in the Node.js 8 runtime; in newer runtimes it was deprecated. More info in the documentation.
If your function requires one of the environment variables from an older runtime, you can set the variable when deploying your function.
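For example, a sketch of setting it yourself at deploy time with the gcloud CLI (the function name, runtime, and region here are placeholders):

gcloud functions deploy my-function \
  --runtime nodejs18 \
  --trigger-http \
  --region us-central1 \
  --set-env-vars FUNCTION_REGION=us-central1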

Receiving error in AWS Secrets Manager awscli: Version "AWSCURRENT" not found when deployed via Terraform

Overview
Create a aws_secretsmanager_secret
Create a aws_secretsmanager_secret_version
Store a uniquely generated string as that above version
Use local-exec provisioner to store the actual secured string using bash
Reference that string using the secretsmanager resource in for example, an RDS instance deployment.
Objective
Keep all plain text strings out of remote-state residing in a S3 bucket
Use AWS Secrets Manager to store these strings
Set once, retrieve by calling the resource in Terraform
Problem
Error: Secrets Manager Secret
"arn:aws:secretsmanager:us-east-1:82374283744:secret:Example-rds-secret-fff42b69-30c1-df50-8e5c-f512464a4a11-pJvC5U"
Version "AWSCURRENT" not found
when running terraform apply
Question
Why isn't it moving the AWSCURRENT version automatically? Am I missing something? Is my bash command wrong? The value does not write to the secret_version, but it does reference it correctly.
Look at the main.tf code, which actually performs the command:
provisioner "local-exec" {
  command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --version-stages AWSCURRENT --region ${var.aws_region} --profile ${var.aws-profile}'"
}
Code
main.tf
data "aws_secretsmanager_secret_version" "rds-secret" {
secret_id = aws_secretsmanager_secret.rds-secret.id
}
data "aws_secretsmanager_secret" "secretsmanager-name" {
arn = aws_secretsmanager_secret.rds-secret.arn
}
resource "random_password" "db_password" {
length = 56
special = true
min_special = 5
override_special = "!#$%^&*()-_=+[]{}<>:?"
keepers = {
pass_version = 1
}
}
resource "random_uuid" "secret-uuid" { }
resource "aws_secretsmanager_secret" "rds-secret" {
name = "DAL-${var.environment}-rds-secret-${random_uuid.secret-uuid.result}"
}
resource "aws_secretsmanager_secret_version" "rds-secret-version" {
secret_id = aws_secretsmanager_secret.rds-secret.id
secret_string = random_password.db_password.result
provisioner "local-exec" {
command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --region ${var.aws_region} --profile ${var.aws-profile}'"
}
}
variables.tf
variable "aws-profile" {
description = "Local AWS Profile Name "
type = "string"
}
variable "aws_region" {
description = "aws region"
default="us-east-1"
type = "string"
}
variable "environment" {}
terraform.tfvars
aws_region  = "us-east-1"
aws-profile = "Example-Environment"
environment = "dev"
The error likely isn't occurring in your provisioner execution per se, because if you remove the provisioner block the error still occurs on apply, though confusingly only the first time after a destroy.
Removing the data "aws_secretsmanager_secret_version" "rds-secret" block as well "resolves" the error completely.
I'm guessing there is some sort of configuration delay issue here, but adding a 20-second delay provisioner to the aws_secretsmanager_secret.rds-secret resource block didn't help.
And the value from the data block can be successfully output on subsequent apply runs, so maybe it's not just timing.
Even if you resolve the above more basic issue, it's likely your provisioner will still be confusing things by modifying a resource that Terraform is trying to manage in the same run. I'm not sure there's a way to get around that except perhaps by splitting into two separate operations.
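One way to do that split, if you go that route, is Terraform's -target flag; a sketch only, not something I've tested against this configuration:

# first write the secret version on its own
terraform apply -target=aws_secretsmanager_secret_version.rds-secret-version

# then run the full apply so everything else reads the finished value
terraform apply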
Update:
It turns out that on the first run the data sources are read before the aws_secretsmanager_secret_version resource is created. Just adding depends_on = [aws_secretsmanager_secret_version.rds-secret-version] to the data "aws_secretsmanager_secret_version" block resolves this fully and makes the interpolation for your provisioner work as well. I haven't tested the actual provisioner.
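In other words, a sketch of the adjusted data block, using the same resource names as in the question:

data "aws_secretsmanager_secret_version" "rds-secret" {
  secret_id = aws_secretsmanager_secret.rds-secret.id

  # defer the read until the version has actually been written
  depends_on = [aws_secretsmanager_secret_version.rds-secret-version]
}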
Also you may need to consider this (which I take to not always apply to 0.13):
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.

How to access JSON from external data source in Terraform?

I am receiving JSON from an http Terraform data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following:
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the Ruby function:
#!/usr/bin/env ruby
require 'json'

# read the JSON query Terraform sends on stdin, then echo it back as JSON
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
  data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access the data inside of the external data source, but I am unable to.
Ultimately, what I want to do is create a list of the componentID values located within the external data source.
Some things I have tried:
* output.external: key "0" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform 0.11.x and earlier can't decode JSON directly (jsondecode() arrived in 0.12), but there is a workaround for working with nested lists.
Your Ruby script needs to be adjusted to produce output shaped like the variable http below; then you should be able to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{componentID = "k8QEbeuHdDnU", name = "Jenkins"}]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}

$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

http = k8QEbeuHdDnU
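For anyone on Terraform 0.12 or later: jsondecode() can parse the http data source's body directly, so the external data source and Ruby shim aren't needed at all. A sketch, untested against the question's endpoint (note that newer releases of the http provider expose the payload as response_body rather than body):

locals {
  components    = jsondecode(data.http.example.body)
  component_ids = [for c in local.components : c.componentID]
}

output "component_ids" {
  value = local.component_ids
}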

AWS Lambda code explanation

Can anybody please explain how the code below works?
def lambda_handlerOut(event, context):
    if len(event) > 0:
        success=1
        print("length of event outside for--"+str(len(event)))
        for record in event['Records']:
            print("length of event--"+str(len(event)))
            bucket=record['s3']['bucket']['name']
            key=record['s3']['object']['key']
            print("Bucket--"+bucket)
            print("File that triggered this event--"+key)
Thanks in advance.
Regards,
Eleena Jose
This is a Lambda that receives S3 events, for example a PutObject request that creates a new file.
The method is the standard Python handler function; take a look at the Lambda Function Handler docs for more details.
The structure of the event is defined here, but basically there are some number of Records being iterated through, and for each record the bucket and key are extracted and printed.
So, in more detail (comments above the line they reference):
# standard lambda event handler definition
def lambda_handlerOut(event, context):
    # make sure that something was given - likely unneeded
    if len(event) > 0:
        success=1
        print("length of event outside for--"+str(len(event)))
        # loop through each record in Records
        for record in event['Records']:
            print("length of event--"+str(len(event)))
            # take a look at the event structure - just extracting parts
            bucket=record['s3']['bucket']['name']
            # key is the object name - that is, the file
            key=record['s3']['object']['key']
            print("Bucket--"+bucket)
            print("File that triggered this event--"+key)
EDIT
As I linked to above, the data in the event object looks something like:
{
  "Records":[
    {
      "eventVersion":"2.0",
      "eventSource":"aws:s3",
      "awsRegion":"us-east-1",
      "eventTime":"1970-01-01T00:00:00.000Z",
      "eventName":"ObjectCreated:Put",
      "userIdentity":{
        "principalId":"AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters":{
        "sourceIPAddress":"127.0.0.1"
      },
      "responseElements":{
        "x-amz-request-id":"C3D13FE58DE4C810",
        "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3":{
        "s3SchemaVersion":"1.0",
        "configurationId":"testConfigRule",
        "bucket":{
          "name":"mybucket",
          "ownerIdentity":{
            "principalId":"A3NL1KOZZKExample"
          },
          "arn":"arn:aws:s3:::mybucket"
        },
        "object":{
          "key":"HappyFace.jpg",
          "size":1024,
          "eTag":"d41d8cd98f00b204e9800998ecf8427e",
          "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
          "sequencer":"0055AED6DCD90281E5"
        }
      }
    }
  ]
}
So, as an example, bucket=record['s3']['bucket']['name'] starts by getting the s3 record from the data, which leaves:
"s3":{
"s3SchemaVersion":"1.0",
"configurationId":"testConfigRule",
"bucket":{
"name":"mybucket",
"ownerIdentity":{
"principalId":"A3NL1KOZZKExample"
},
"arn":"arn:aws:s3:::mybucket"
},
"object":{
"key":"HappyFace.jpg",
"size":1024,
"eTag":"d41d8cd98f00b204e9800998ecf8427e",
"versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
"sequencer":"0055AED6DCD90281E5"
}
}
From there, it gets the bucket stanza:
"bucket":{
"name":"mybucket",
"ownerIdentity":{
"principalId":"A3NL1KOZZKExample"
},
"arn":"arn:aws:s3:::mybucket"
}
and lastly, the name:
"name":"mybucket"
This is assigned to the variable bucket, which is printed out later. The key (which is the file name in this example) works the same way but pulls different parts of the event.
Does that make sense now?
