How to structure terraform code to get Lambda ARN after creation? - aws-lambda

This is a follow-up to a previous question I asked: How to get AWS Lambda ARN using Terraform?
That question was answered, but it turned out the answer didn't actually solve my problem.
The terraform code I have written provisions a Lambda function:
Root module:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region  = var.region
  profile = var.aws_profile
}

module "aws_lambda_function" {
  source = "./modules/lambda_function"
}
Child module:
resource "aws_lambda_function" "lambda_function" {
function_name = "lambda_function"
handler = "lambda_function.lambda_handler"
runtime = "python3.8"
filename = "./task/dist/package.zip"
role = aws_iam_role.lambda_exec.arn
}
resource "aws_iam_role" "lambda_exec" {
name = "aws_iam_lambda"
assume_role_policy = file("policy.json")
}
What I want the user to be able to run to get the Lambda ARN:
terraform output
The problem: I cannot seem to include the following code anywhere in my Terraform code, as it causes a "ResourceNotFoundException: Function not found..." error.
data "aws_lambda_function" "existing" {
function_name = "lambda_function"
}
output "arn" {
value = data.aws_lambda_function.existing.arn
}
Where or how do I need to include this to be able to get the ARN or is this possible?

You can't look up the data for a resource you are creating at the same time. You need to output the ARN from the module, and then output it again from the main Terraform template.
In your Lambda module:
output "arn" {
value = aws_lambda_function.lambda_function.arn
}
Then in your main file:
output "arn" {
value = module.aws_lambda_function.arn
}

Related

how to refer sns arn from terraform code in a lambda python py file?

My Lambda Python code uses an SNS topic ARN, but this ARN is generated by the Terraform code. Is there a way to refer to it somehow in the Python Lambda code?
import boto3

def lambda_handler(event, context):
    try:
        # some code
        publish_vote(vote, voter)
    except:
        # some code
        pass
    return {'statusCode': 200, 'body': '{"status": "success"}'}

def publish_vote(vote, voter):
    sns = boto3.client('sns', region_name='us-east-1')
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:025416187662:erjan',
        Message='""',
        MessageAttributes={
            "vote": {
                "DataType": "String",
                "StringValue": vote,
            },
            "voter": {
                "DataType": "String",
                "StringValue": voter,
            }
        }
    )
SNS terraform code:
resource "aws_sns_topic" "vote_sns" {
name = "erjan-sns"
}
resource "aws_sns_topic_policy" "vote_sns_access_policy" {
arn = aws_sns_topic.vote_sns.arn
policy = data.aws_iam_policy_document.vote_sns_access_policy.json
}
data "aws_iam_policy_document" "vote_sns_access_policy" {
policy_id = "__default_policy_ID"
statement {
#some stuff code
}
}
output "sns_arn_erjan" {
value = aws_sns_topic.vote_sns.arn
description = "aws full sns topic"
}
For your information:
I see you have already solved this problem, but I have one suggestion.
The lambda function can refer to the topic ARN by putting the ARN as a parameter into Parameter Store with Terraform.
resource "aws_ssm_parameter" "vote_sns" {
name = "sns_arn_erjan"
type = "String"
value = aws_sns_topic.vote_sns.arn
}
aws_ssm_parameter | Resources | hashicorp/aws | Terraform Registry
The lambda function can refer to the parameter stored in Parameter Store using boto3.
get_parameter - SSM — Boto3 Docs 1.26.54 documentation
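As a minimal sketch of the permissions side (assuming the function's execution role is managed in Terraform as aws_iam_role.lambda_exec, a hypothetical name not shown in the question), the role also needs to be allowed to read that parameter before the boto3 get_parameter call will succeed:
resource "aws_iam_role_policy" "read_sns_arn_parameter" {
  # Hypothetical policy name and role reference; adjust to your own resources.
  name = "read-sns-arn-parameter"
  role = aws_iam_role.lambda_exec.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ssm:GetParameter"]
        Resource = [aws_ssm_parameter.vote_sns.arn]
      }
    ]
  })
}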
Your Terraform code does not have code for creating the Lambda function itself. Are you creating it manually? If yes, then first create it with Terraform as well. A basic example is mentioned here.
Within the definition, there is an environment argument. Use that to define your environment variables as:
environment {
  variables = {
    SNS_ARN = aws_sns_topic.vote_sns.arn # ARN from the defined SNS resource.
  }
}
Then refer to it in your Python code as:
import os
SNS_ARN = os.environ.get("SNS_ARN")
...
Alternatively, you could also consider using AWS SAM

Triggering a Lambda once a DMS Replication Task has completed in Terraform

I would like to trigger a Lambda once a DMS Replication Task has successfully completed. I have the following Terraform code, which successfully creates all the assets, but my Lambda is not being triggered.
resource "aws_dms_event_subscription" "my_event_subscription" {
enabled = true
event_categories = ["state change"]
name = "my-event-subscription"
sns_topic_arn = aws_sns_topic.my_event_subscription_topic.arn
source_ids = ["my-replication-task"]
source_type = "replication-task"
}
resource "aws_sns_topic" "my_event_subscription_topic" {
name = "my-event-subscription-topic"
}
resource "aws_sns_topic_subscription" "my_event_subscription_topic_subscription" {
topic_arn = aws_sns_topic.my_event_subscription_topic.arn
protocol = "lambda"
endpoint = aws_lambda_function.my_lambda_function.arn
}
resource "aws_sns_topic_policy" "allow_publish" {
arn = aws_sns_topic.my_event_subscription_topic.arn
policy = data.aws_iam_policy_document.allow_dms_and_events_document.json
}
resource "aws_lambda_permission" "allow_sns_invoke" {
statement_id = "AllowExecutionFromSNS"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.my_lambda_function.function_name
principal = "sns.amazonaws.com"
source_arn = aws_sns_topic.my_event_subscription_topic.arn
}
data "aws_iam_policy_document" "allow_dms_and_events_document" {
statement {
actions = ["SNS:Publish"]
principals {
identifiers = [
"dms.amazonaws.com",
"events.amazonaws.com"
]
type = "Service"
}
resources = [aws_sns_topic.my_event_subscription_topic.arn]
}
}
Am I missing something?
Is event_categories = ["state change"] correct? (This suggests "state change" is correct. I'm less concerned right now whether the Lambda is triggered for every state change rather than just DMS-EVENT-0079.)
Is there something I can add to get CloudWatch logs from the event subscription, to tell me what's wrong?
You can try giving it a JSON event pattern like the one shared in the AWS documentation:
{
  "version": "0",
  "id": "11a11b11-222b-333a-44d4-01234a5b67890",
  "detail-type": "DMS Replication Task State Change",
  "source": "aws.dms",
  "account": "0123456789012",
  "time": "1970-01-01T00:00:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:dms:us-east-1:012345678901:task:AAAABBBB0CCCCDDDDEEEEE1FFFF2GGG3FFFFFF3"
  ],
  "detail": {
    "type": "ReplicationTask",
    "category": "StateChange",
    "eventType": "REPLICATION_TASK_STARTED",
    "eventName": "DMS-EVENT-0069",
    "resourceLink": "https://console.aws.amazon.com/dms/v2/home?region=us-east-1#taskDetails/taskName",
    "detailMessage": "Replication task started, with flag = fresh start"
  }
}
You can check how to give this as JSON in Terraform here.
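As a rough sketch of that idea in Terraform (an EventBridge rule instead of the DMS event subscription; the rule name is a placeholder, and the Lambda is assumed to be aws_lambda_function.my_lambda_function from the question):
resource "aws_cloudwatch_event_rule" "dms_task_state_change" {
  name = "dms-task-state-change"

  # Match DMS replication task state-change events; narrow detail.eventName to the
  # specific event (e.g. DMS-EVENT-0079 from the question) if the Lambda should not
  # fire on every state change.
  event_pattern = jsonencode({
    source        = ["aws.dms"]
    "detail-type" = ["DMS Replication Task State Change"]
    detail = {
      eventName = ["DMS-EVENT-0079"]
    }
  })
}

resource "aws_cloudwatch_event_target" "invoke_lambda" {
  rule = aws_cloudwatch_event_rule.dms_task_state_change.name
  arn  = aws_lambda_function.my_lambda_function.arn
}

resource "aws_lambda_permission" "allow_eventbridge_invoke" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.my_lambda_function.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.dms_task_state_change.arn
}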

How can I add a AWS cloudwatch event to a aws_lambda_function which is based on a container image with terraform?

I want my Lambda function, which is based on a Docker image in ECR, to be triggered by a scheduled CloudWatch event.
The problem is that I cannot attach the function_name myFunction from the module "lambda_function_container_image" to the aws_lambda_permission.
It works when I have a normal Lambda function like the one below, but not with the Lambda function from an image URI:
resource "aws_lambda_function" "myFunction" {
function_name = "myFunction"
role = aws_iam_role.lambda_execution_role.arn
handler = "exports.handler"
runtime = "python3.8"
}
I have the following code:
AWS CloudWatch event:
resource "aws_cloudwatch_event_rule" "every_five_minutes" {
name = "every-five-minutes"
description = "Fires every five minutes"
schedule_expression = "rate(5 minutes)"
}
Lambda function based on container image:
module "lambda_function_container_image" {
source = "terraform-aws-modules/lambda/aws"
function_name = "myFunction"
description = "awesome function"
create_package = false
image_uri = "${data.aws_caller_identity.current.account_id}.dkr.ecr${var.aws_region}.amazonaws.com/container_name"
package_type = "Image"
}
Lambda permission:
resource "aws_lambda_permission" "allow_cloudwatch_to_call_myFunction" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.myFunction.function_name
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.every_five_minutes.arn
}
I get the following error with the current aws_lambda_permission:
Error message:
Error: Reference to undeclared resource
-> points to function_name in aws_lambda_permission
You need to reference function_name through the module you are using. According to the documentation of terraform-aws-modules/lambda/aws the module has an output called lambda_function_name.
That means that the following should work for you:
resource "aws_lambda_permission" "allow_cloudwatch_to_call_myFunction" {
[...]
function_name = module.lambda_function_container_image.lambda_function_name
[...]
}

Lambda chaining - Invoke lambda from another lambda using terraform

I am trying to invoke one AWS Lambda from another and perform Lambda chaining. The rationale behind doing this is that AWS does not allow multiple triggers from the same S3 bucket.
I have created one Lambda with an S3 trigger. The Java code of the first Lambda listens for the S3 event and contains the invocation of another Lambda; the second Lambda is invoked from the first. Both Lambdas are created with Terraform.
Lambda A has the S3 trigger and is invoked on an S3 event on a particular bucket. Lambda A does the processing and then invokes Lambda B using an invoke request. The Lambda B invocation from the Lambda A Java code is:
public class EventHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) throws RuntimeException {
        InvokeRequest req = new InvokeRequest()
                .withFunctionName("LambdaFunctionB")
                .withPayload(json);
        // Invoke Lambda B; the question does not show how the client is built, the default builder is assumed here.
        AWSLambdaClientBuilder.defaultClient().invoke(req);
        return "Lambda B invoked";
    }
}
Both the lambdas are created using terraform. Scripts below:
Lambda A terraform:
module "lambda_function" {
source = "Git Path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionA"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "EventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_bucket" {
statement_id = "statementId"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "s3.bucket.arn"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "bucketName"
lambda_function {
lambda_function_arn = "${module.lambda_function.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "path/subPath"
}
}
Lambda B terraform:
module "lambda_function" {
source = "git path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionB"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "LambdaBEventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_lambda" {
statement_id = "AllowExecutionFromLambda"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionA"
}
lambda-iam-role has the following policies attached:
AmazonS3FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
AmazonSNSFullAccess
CloudWatchEventsFullAccess
The expectation was that Lambda A would successfully invoke Lambda B, but I am getting an AccessDeniedException in the Lambda A logs and it is not able to invoke Lambda B. The error is:
com.amazonaws.services.lambda.model.AWSLambdaException: User: arn:aws:sts::xxxxxxxxx:assumed-role/lambda-iam-role/LambdaFunctionA is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: f495ede3-b3cb-47a1-b884-16996545233d)
Hope this helps you; it's not exactly the same, but it shows invoking one Lambda from another Lambda: Github
I think the Lambda's execution role also needs a policy that allows "lambda:InvokeFunction" on the target function.
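As a rough sketch of what that could look like (the policy name is a placeholder, the role is attached by name, and the Lambda B ARN is taken from the error message in the question):
resource "aws_iam_role_policy" "allow_invoke_lambda_b" {
  # Placeholder names; adjust the role reference and target ARN to your setup.
  name = "allow-invoke-lambda-b"
  role = "lambda-iam-role"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["lambda:InvokeFunction"]
        Resource = ["arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB"]
      }
    ]
  })
}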
I found an answer online, using the aws-sdk.
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
  region: 'default'
});

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  Payload: JSON.stringify(event, null, 2) // pass params
}, function(error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
Hope it helps
:)

Terraform execute script before lambda creation

I have a terraform configuration that correctly creates a lambda function on aws with a zip file provided.
My problem is that I always have to package the Lambda first (I use the serverless package method for this), so I would like to execute a script that packages my function and moves the zip to the right directory before Terraform creates the Lambda function.
Is that possible? Maybe using a combination of null_resource and local-exec?
You already proposed the best answer :)
When you add a depends_on = ["null_resource.serverless_execution"] to your Lambda resource, you can ensure that packaging will be done before the zip file is uploaded.
Example:
resource "null_resource" "serverless_execution" {
provisioner "local-exec" {
command = "serverless package ..."
}
}
resource "aws_lambda_function" "update_lambda" {
depends_on = ["null_resource.serverless_execution"]
filename = "${path.module}/path/to/package.zip"
[...]
}
https://www.terraform.io/docs/provisioners/local-exec.html
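One caveat worth noting (not from the original answer): a null_resource provisioner only runs when the resource is created, so a triggers argument is commonly added to force the packaging step to re-run. A minimal sketch, using timestamp() so it re-runs on every apply:
resource "null_resource" "serverless_execution" {
  # timestamp() changes on every apply, so the packaging command always re-runs.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "serverless package ..."
  }
}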
The answer is already given, but I was looking for a way to install NPM modules on the fly, zip the code, and then deploy the Lambda function, along with a longer timeout in case your Lambda function package is large. So here is my finding; it may help someone else.
# Install NPM modules before creating the ZIP
resource "null_resource" "npm" {
  provisioner "local-exec" {
    command = "cd ../lambda-functions/loadbalancer-to-es/ && npm install --prod=only"
  }
}

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
  depends_on  = ["null_resource.npm"]
}

# Create the AWS Lambda function: memory size, Node.js version, handler, endpoint, doctype and environment settings
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  filename      = "${data.archive_file.source.output_path}"
  function_name = "someprefix-alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  memory_size   = 1024
  timeout       = 900

  timeouts {
    create = "30m"
  }

  runtime = "nodejs8.10"
  role    = "${aws_iam_role.role.arn}"

  # Hash the zip contents (not the path string) so code changes are detected
  source_code_hash = "${data.archive_file.source.output_base64sha256}"
  handler          = "index.handler"
  # source_code_hash = "${base64sha256(file("/elb-logs-to-elasticsearch.zip"))}"

  environment {
    variables = {
      ELK_ENDPOINT = "someprefix-elk.dns.co"
      ELK_INDEX    = "test-web-server-"
      ELK_REGION   = "us-west-2"
      ELK_DOCKTYPE = "elb-access-logs"
    }
  }
}
