How to access Terraform Lambda variables in a Ruby Lambda function?

I have Terraform code which creates a Lambda function, and some Ruby code that is the Lambda function itself. I cannot figure out, or find any information on, how to actually use the variables that are being passed in from Terraform inside the Lambda. Ultimately I just need to know how to use the Terraform variables in the Ruby Lambda function.
I have found examples in Python and JS, but there is little similarity.
Here is my Terraform code:
resource "aws_lambda_function" "send_sns_lambda" {
  filename         = "statuslambda.zip"
  function_name    = "status-page-send-sns"
  source_code_hash = "${data.archive_file.status_lambdas.output_base64sha256}"
  role             = "${aws_iam_role.status_lambda.arn}"
  handler          = "statusLambda.send_sns"
  runtime          = "ruby2.5"

  vpc_config {
    subnet_ids         = ["subnet-xxxx", "subnet-xxxxx"]
    security_group_ids = ["sg-xxxxxx"]
  }

  environment {
    variables = {
      status = "Major Outage"
    }
  }
}
And my Lambda function:
def send_sns(event:, context:)
  sns = Aws::SNS::Resource.new(region: 'us-xxx-xxx')
  topic = sns.topic('arn:aws:sns:us-east-1:xxxxxxxx')
  topic.publish({
    message: '#{status}'
  })
end
The idea is that the status variable in Terraform gets passed into the status variable in the Ruby code.
Here is the Python example I have found:
import os

def lambda_handler(event, context):
    return "{} from Lambda!".format(os.environ['greeting'])

So your question is "how to access environment variables in Ruby"? That would be ENV['status'].
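For example, a minimal sketch of the handler from the question, rewritten to read that environment variable (the region and topic ARN placeholders are taken from the question; only the require line and the ENV lookup are new; note that Ruby only interpolates #{} inside double-quoted strings, so '#{status}' would stay literal even if status were defined):
require 'aws-sdk-sns'

def send_sns(event:, context:)
  sns = Aws::SNS::Resource.new(region: 'us-xxx-xxx')
  topic = sns.topic('arn:aws:sns:us-east-1:xxxxxxxx')
  # ENV['status'] returns the value of the "status" key from the Terraform
  # environment.variables map ("Major Outage" in the configuration above).
  topic.publish(message: ENV['status'])
end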

Related

How to refer to an SNS ARN from Terraform code in a Python Lambda file?

My Python Lambda uses an SNS topic ARN, but this ARN is generated by the Terraform code. Is there a way to refer to it somehow in the Python Lambda code?
import boto3

def lambda_handler(event, context):
    try:
        # some code
        publish_vote(vote, voter)
    except:
        # some code
        return {'statusCode': 200, 'body': '{"status": "success"}'}

def publish_vote(vote, voter):
    sns = boto3.client('sns', region_name='us-east-1')
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:025416187662:erjan',
        Message='""',
        MessageAttributes={
            "vote": {
                "DataType": "String",
                "StringValue": vote,
            },
            "voter": {
                "DataType": "String",
                "StringValue": voter,
            }
        }
    )
SNS terraform code:
resource "aws_sns_topic" "vote_sns" {
  name = "erjan-sns"
}

resource "aws_sns_topic_policy" "vote_sns_access_policy" {
  arn    = aws_sns_topic.vote_sns.arn
  policy = data.aws_iam_policy_document.vote_sns_access_policy.json
}

data "aws_iam_policy_document" "vote_sns_access_policy" {
  policy_id = "__default_policy_ID"
  statement {
    # some stuff code
  }
}

output "sns_arn_erjan" {
  value       = aws_sns_topic.vote_sns.arn
  description = "aws full sns topic"
}
For your information:
I see you have already solved this problem, but I have one suggestion.
The Lambda function can refer to the topic ARN if you put the ARN into Parameter Store as a parameter with Terraform.
resource "aws_ssm_parameter" "vote_sns" {
  name  = "sns_arn_erjan"
  type  = "String"
  value = aws_sns_topic.vote_sns.arn
}
(See the aws_ssm_parameter resource in the Terraform Registry.)
The Lambda function can then read the parameter from Parameter Store using boto3 (see get_parameter in the Boto3 SSM documentation).
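A minimal sketch of that lookup, assuming the parameter name from the aws_ssm_parameter resource above and that the Lambda's execution role is allowed ssm:GetParameter:
import boto3

ssm = boto3.client('ssm')

def get_sns_arn():
    # "sns_arn_erjan" is the parameter name created by the Terraform resource above
    response = ssm.get_parameter(Name='sns_arn_erjan')
    return response['Parameter']['Value']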
Your terraform code does not have code for creating the lambda function itself. Are you creating it manually? If yes, then first create that as well using terraform. A basic example is mentioned here
Within the definition, there is an argument for environment. Use that to define your env variables as:
environment {
  variables = {
    SNS_ARN = aws_sns_topic.vote_sns.arn # ARN from the SNS resource defined above.
  }
}
Then refer to it in your Python code:
import os
SNS_ARN = os.environ.get("SNS_ARN")
...
Alternatively, you could also consider using AWS SAM

Is there a way, inside a terraform script, to retrieve the latest version of a layer?

I have Lambdas that reference a layer. This layer is maintained by someone else, and when a new version is released I have to update my Terraform code to put the latest version in the ARN (here 19).
Is there a way, in the Terraform script, to get the latest version and use it?
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda1"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  source_path   = "../src/lambda-function1"

  tags = {
    Name = "my-lambda1"
  }

  layers = [
    "arn:aws:lambda:eu-central-1:587522145896:layer:my-layer-name:19"
  ]
}
Thanks.
PS: this means the layer's Terraform script is not in mine; it's another script that I don't have access to.
You can use the aws_lambda_layer_version data source to discover the latest version.
For example:
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda1"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  source_path   = "../src/lambda-function1"

  tags = {
    Name = "my-lambda1"
  }

  layers = [
    data.aws_lambda_layer_version.layer_version.arn
  ]
}

data "aws_lambda_layer_version" "layer_version" {
  layer_name = "my-layer-name"
}

How to structure terraform code to get Lambda ARN after creation?

This was a previous question I asked: How to get AWS Lambda ARN using Terraform?
This question was answered, but it turns out it didn't actually solve my problem, so this is a follow-up.
The terraform code I have written provisions a Lambda function:
Root module:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region  = var.region
  profile = var.aws_profile
}

module "aws_lambda_function" {
  source = "./modules/lambda_function"
}
Child module:
resource "aws_lambda_function" "lambda_function" {
  function_name = "lambda_function"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.8"
  filename      = "./task/dist/package.zip"
  role          = aws_iam_role.lambda_exec.arn
}

resource "aws_iam_role" "lambda_exec" {
  name               = "aws_iam_lambda"
  assume_role_policy = file("policy.json")
}
What I want the user to be able to do to get the Lambda ARN:
terraform output
The problem: I cannot seem to include the following code anywhere in my Terraform code, as it causes a "ResourceNotFoundException: Function not found..." error.
data "aws_lambda_function" "existing" {
  function_name = "lambda_function"
}

output "arn" {
  value = data.aws_lambda_function.existing.arn
}
Where or how do I need to include this to be able to get the ARN or is this possible?
You can't look up the data for a resource you are creating at the same time. You need to output the ARN from the module, and then output it again from the main Terraform template.
In your Lambda module:
output "arn" {
  value = aws_lambda_function.lambda_function.arn
}
Then in your main file:
output "arn" {
  value = module.aws_lambda_function.arn
}

Lambda chaining - Invoke lambda from another lambda using terraform

I am trying to invoke one AWS Lambda from another and perform Lambda chaining. The rationale behind doing this is that AWS does not allow multiple triggers from the same S3 bucket.
I have created one Lambda with an S3 trigger. The Java code of the first Lambda listens for the S3 event and contains the invocation of another Lambda; the second Lambda is invoked from the first. Both Lambdas are created with Terraform.
Lambda A has the S3 trigger and will be invoked on an S3 event on a particular bucket. Lambda A does the processing and then invokes Lambda B using an invoke request. The invocation of Lambda B from Lambda A's Java code is:
public class EventHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) throws RuntimeException {
        InvokeRequest req = new InvokeRequest()
                .withFunctionName("LambdaFunctionB")
                .withPayload(json);
        return "Lambda B invoked";
    }
}
Both Lambdas are created using Terraform. Scripts below:
Lambda A terraform:
module "lambda_function" {
  source                      = "Git Path"
  absolute_artifact_path      = "../lambda.jar"
  lambda_function_name        = "LambdaFunctionA"
  lambda_function_description = ""
  lambda_function_runtime     = "java8"
  lambda_handler_name         = "EventHandler"
  lambda_execution_role_name  = "lambda-iam-role"
  lambda_memory               = "512"
  dead_letter_target_arn      = "error-handling-arn"
}

resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "statementId"
  action        = "lambda:InvokeFunction"
  function_name = "${module.lambda_function.lambda_arn}"
  principal     = "s3.amazonaws.com"
  source_arn    = "s3.bucket.arn"
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "bucketName"

  lambda_function {
    lambda_function_arn = "${module.lambda_function.lambda_arn}"
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "path/subPath"
  }
}
Lambda B terraform:
module "lambda_function" {
  source                      = "git path"
  absolute_artifact_path      = "../lambda.jar"
  lambda_function_name        = "LambdaFunctionB"
  lambda_function_description = ""
  lambda_function_runtime     = "java8"
  lambda_handler_name         = "LambdaBEventHandler"
  lambda_execution_role_name  = "lambda-iam-role"
  lambda_memory               = "512"
  dead_letter_target_arn      = "error-handling-arn"
}

resource "aws_lambda_permission" "allow_lambda" {
  statement_id  = "AllowExecutionFromLambda"
  action        = "lambda:InvokeFunction"
  function_name = "${module.lambda_function.lambda_arn}"
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionA"
}
lambda-iam-role has the below policies attached:
AmazonS3FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
AmazonSNSFullAccess
CloudWatchEventsFullAccess
The expectation was that Lambda A should successfully invoke Lambda B, but I am getting an AccessDeniedException in Lambda A's logs and it is not able to invoke Lambda B. The error is:
com.amazonaws.services.lambda.model.AWSLambdaException: User: arn:aws:sts::xxxxxxxxx:assumed-role/lambda-iam-role/LambdaFunctionA is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: f495ede3-b3cb-47a1-b884-16996545233d)
Hope this helps you. It's not exactly the same, but it shows invoking one Lambda from another Lambda: GitHub
I think the Lambda's execution role needs the "lambda:InvokeFunction" permission as well.
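As a rough sketch in Terraform (the role name and target ARN are taken from the question; the resource and policy names here are made up), that permission could be attached to Lambda A's role with something like an aws_iam_role_policy:
resource "aws_iam_role_policy" "allow_invoke_lambda_b" {
  name = "allow-invoke-lambda-b"
  role = "lambda-iam-role"

  # Allow the caller's role to invoke Lambda B specifically.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionB"
    }
  ]
}
EOF
}
Note that the aws_lambda_permission resource in the question attaches a resource-based policy to the function being called, whereas the 403 above is about the calling role lacking lambda:InvokeFunction.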
I found an answer online, using the aws-sdk.
var aws = require('aws-sdk');
var lambda = new aws.Lambda({
  region: 'default'
});

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  Payload: JSON.stringify(event, null, 2) // pass params
}, function(error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
Hope it helps :)

Terraform execute script before lambda creation

I have a terraform configuration that correctly creates a lambda function on aws with a zip file provided.
My problem is that I always have to package the lambda first (I use the serverless package method for this), so I would like to execute a script that packages my function and moves the zip to the right directory before Terraform creates the Lambda function.
Is that possible? Maybe using a combination of null_resource and local-exec?
You already proposed the best answer :)
When you add depends_on = ["null_resource.serverless_execution"] to your Lambda resource, you can ensure that packaging is done before the zip file is uploaded.
Example:
resource "null_resource" "serverless_execution" {
  provisioner "local-exec" {
    command = "serverless package ..."
  }
}

resource "aws_lambda_function" "update_lambda" {
  depends_on = ["null_resource.serverless_execution"]
  filename   = "${path.module}/path/to/package.zip"
  [...]
}
https://www.terraform.io/docs/provisioners/local-exec.html
The answer is already given, but I was looking for a way to install NPM modules on the fly, zip, and then deploy the Lambda function, along with a longer timeout in case your Lambda function is large. So here is my finding; it may help someone else.
# Install NPM modules before creating the ZIP
resource "null_resource" "npm" {
  provisioner "local-exec" {
    command = "cd ../lambda-functions/loadbalancer-to-es/ && npm install --prod=only"
  }
}

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
  depends_on  = ["null_resource.npm"]
}

# Create the AWS Lambda function: memory size, NodeJS version, handler, endpoint, doctype and environment settings
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  filename      = "${data.archive_file.source.output_path}"
  function_name = "someprefix-alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  memory_size   = 1024
  timeout       = 900

  timeouts {
    create = "30m"
  }

  runtime          = "nodejs8.10"
  role             = "${aws_iam_role.role.arn}"
  source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
  handler          = "index.handler"
  # source_code_hash = "${base64sha256(file("/elb-logs-to-elasticsearch.zip"))}"

  environment {
    variables = {
      ELK_ENDPOINT = "someprefix-elk.dns.co"
      ELK_INDEX    = "test-web-server-"
      ELK_REGION   = "us-west-2"
      ELK_DOCKTYPE = "elb-access-logs"
    }
  }
}
