I have been trying to configure Provisioned Concurrency for my AWS Lambda function, but I keep hitting a ValidationException. I have tried the qualifier attribute with both the alias version and the alias name. During terraform apply it waits for about two minutes and then throws the error, yet the config shows up in the AWS console as successfully created with status Ready. Below is my configuration.
resource "aws_lambda_alias" "contact_lambda_alias" {
name = module.aws_lambda_function_contact_alias_label.id
function_name = module.terraform_aws_lambda_contact.lambda_arn
function_version = module.terraform_aws_lambda_contact.latest_published_version
}
resource "aws_lambda_provisioned_concurrency_config" "contact_lambda_alias" {
function_name = module.terraform_aws_lambda_contact.lambda_arn
provisioned_concurrent_executions = 1
qualifier = module.terraform_aws_lambda_contact.latest_published_version
timeouts {
create = "30m"
update = "30m"
}
}
I tried with and without the timeouts block, but I am still hitting the ValidationException every time.
This is the error:
ValidationException
You should be using the alias ARN:

function_name = aws_lambda_alias.contact_lambda_alias.arn
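For reference, a minimal sketch of the full resource following that suggestion (pointing the qualifier at the alias name rather than the published version is an assumption here):

resource "aws_lambda_provisioned_concurrency_config" "contact_lambda_alias" {
  # Reference the alias ARN instead of the bare function ARN.
  function_name                     = aws_lambda_alias.contact_lambda_alias.arn
  provisioned_concurrent_executions = 1

  # Assumed: qualify with the alias name instead of the published version number.
  qualifier                         = aws_lambda_alias.contact_lambda_alias.name
}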
I would like to trigger a Lambda once a DMS Replication Task has successfully completed. I have the following Terraform code, which successfully creates all the assets, but my Lambda is not being triggered.
resource "aws_dms_event_subscription" "my_event_subscription" {
enabled = true
event_categories = ["state change"]
name = "my-event-subscription"
sns_topic_arn = aws_sns_topic.my_event_subscription_topic.arn
source_ids = ["my-replication-task"]
source_type = "replication-task"
}
resource "aws_sns_topic" "my_event_subscription_topic" {
name = "my-event-subscription-topic"
}
resource "aws_sns_topic_subscription" "my_event_subscription_topic_subscription" {
topic_arn = aws_sns_topic.my_event_subscription_topic.arn
protocol = "lambda"
endpoint = aws_lambda_function.my_lambda_function.arn
}
resource "aws_sns_topic_policy" "allow_publish" {
arn = aws_sns_topic.my_event_subscription_topic.arn
policy = data.aws_iam_policy_document.allow_dms_and_events_document.json
}
resource "aws_lambda_permission" "allow_sns_invoke" {
statement_id = "AllowExecutionFromSNS"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.my_lambda_function.function_name
principal = "sns.amazonaws.com"
source_arn = aws_sns_topic.my_event_subscription_topic.arn
}
data "aws_iam_policy_document" "allow_dms_and_events_document" {
statement {
actions = ["SNS:Publish"]
principals {
identifiers = [
"dms.amazonaws.com",
"events.amazonaws.com"
]
type = "Service"
}
resources = [aws_sns_topic.my_event_subscription_topic.arn]
}
}
Am I missing something?
Is event_categories = ["state change"] correct? (This suggests state change is correct.
I'm less concerned right now if the Lambda is triggered for every state change, and not just DMS-EVENT-0079.)
Is there something I can add to get CloudWatch logs from the event subscription, to tell me what's wrong?
You can try giving it the sample event JSON shared in the AWS documentation:
{
  "version": "0",
  "id": "11a11b11-222b-333a-44d4-01234a5b67890",
  "detail-type": "DMS Replication Task State Change",
  "source": "aws.dms",
  "account": "0123456789012",
  "time": "1970-01-01T00:00:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:dms:us-east-1:012345678901:task:AAAABBBB0CCCCDDDDEEEEE1FFFF2GGG3FFFFFF3"
  ],
  "detail": {
    "type": "ReplicationTask",
    "category": "StateChange",
    "eventType": "REPLICATION_TASK_STARTED",
    "eventName": "DMS-EVENT-0069",
    "resourceLink": "https://console.aws.amazon.com/dms/v2/home?region=us-east-1#taskDetails/taskName",
    "detailMessage": "Replication task started, with flag = fresh start"
  }
}
You can check how to pass this JSON in Terraform here.
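For instance, a minimal sketch of replaying that payload against the function straight from Terraform, assuming the aws_lambda_invocation data source is available in your provider version (the payload is abbreviated to the fields that matter):

data "aws_lambda_invocation" "dms_event_test" {
  function_name = aws_lambda_function.my_lambda_function.function_name

  # Hypothetical smoke test: replay the documented sample event.
  input = jsonencode({
    "detail-type" = "DMS Replication Task State Change"
    source        = "aws.dms"
    detail = {
      eventType = "REPLICATION_TASK_STARTED"
      eventName = "DMS-EVENT-0069"
    }
  })
}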
I am attempting to create an EventBridge setup that will get notifications from Datadog and trigger a Lambda function to store the notifications in an S3 bucket. This is all going to be done through Terraform.
The following is the code I have written:
###################################
# Eventbridge Integration to Lambda
###################################
data "aws_cloudwatch_event_source" "datadog_event_source" {
name_prefix = var.spog_event_bus_name # aws.partner/datadog.com/my_eventbus
}
resource "aws_cloudwatch_event_bus" "datadog_event_bus" {
name = data.aws_cloudwatch_event_source.datadog_event_source.name
event_source_name = data.aws_cloudwatch_event_source.datadog_event_source.name
}
resource "aws_cloudwatch_event_rule" "spog_cloudwatch_rule" {
name = "spog_cloudwatch_rule"
event_bus_name = aws_cloudwatch_event_bus.datadog_event_bus.name
event_pattern = <<EOF
{
"account": [
"${var.aws_account_id}"
]
}
EOF
}
resource "aws_cloudwatch_event_target" "spog_cloudwatch_event_target" {
rule = aws_cloudwatch_event_rule.spog_cloudwatch_rule.name
target_id = aws_lambda_function.write_datadog_events.function_name
arn = aws_lambda_function.write_datadog_events.arn
}
resource "aws_lambda_permission" "spog_allow_cloudwatch" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.write_datadog_events.function_name
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.spog_cloudwatch_rule.arn
}
Here, write_datadog_events is the Lambda function that stores the notifications.
The build succeeds, but when I try to apply the plan, I get an error saying "validationException: EventBus name starting with 'aws.' is not valid". From inspecting the AWS console, it seems that the rule itself is created successfully, but the event bus name is not registered properly on EventBridge: the rule's event bus only says default. I thought that setting the event_bus_name value within aws_cloudwatch_event_rule would make it something other than default, but I was wrong.
Can anyone help me out with this? The Lambda function itself is not wrong (I ran a test case on it); the core issue seems to be EventBridge not registering my event bus name. Also, although the rule is generated, the event bus is never generated.
Thanks for your help.
I want my Lambda function, which is based on a Docker image in ECR, to be triggered by a scheduled CloudWatch event.
The problem is that I cannot attach the function_name myFunction from the module "lambda_function_container_image" to the aws_lambda_permission.
It works when I have a normal Lambda function like the one below, but not with the Lambda function built from an image URI:
resource "aws_lambda_function" "myFunction" {
function_name = "myFunction"
role = aws_iam_role.lambda_execution_role.arn
handler = "exports.handler"
runtime = "python3.8"
}
I have the following code:
AWS CloudWatch event:
resource "aws_cloudwatch_event_rule" "every_five_minutes" {
name = "every-five-minutes"
description = "Fires every five minutes"
schedule_expression = "rate(5 minutes)"
}
Lambda function based on container image:
module "lambda_function_container_image" {
source = "terraform-aws-modules/lambda/aws"
function_name = "myFunction"
description = "awesome function"
create_package = false
image_uri = "${data.aws_caller_identity.current.account_id}.dkr.ecr${var.aws_region}.amazonaws.com/container_name"
package_type = "Image"
}
Lambda permission:
resource "aws_lambda_permission" "allow_cloudwatch_to_call_myFunction" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.myFunction.function_name
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.every_five_minutes.arn
}
I get the following error with the current aws_lambda_permission:
Error: Reference to undeclared resource
(it points to function_name in aws_lambda_permission)
You need to reference function_name through the module you are using. According to the documentation of terraform-aws-modules/lambda/aws, the module has an output called lambda_function_name.
That means that the following should work for you:
resource "aws_lambda_permission" "allow_cloudwatch_to_call_myFunction" {
[...]
function_name = module.lambda_function_container_image.lambda_function_name
[...]
}
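Putting it together, a sketch of the fully wired permission plus the event target that actually connects the rule to the function (the module's lambda_function_arn output is assumed here, per the same documentation):

resource "aws_lambda_permission" "allow_cloudwatch_to_call_myFunction" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = module.lambda_function_container_image.lambda_function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.every_five_minutes.arn
}

# Without a target the rule fires but invokes nothing.
resource "aws_cloudwatch_event_target" "every_five_minutes_target" {
  rule = aws_cloudwatch_event_rule.every_five_minutes.name
  arn  = module.lambda_function_container_image.lambda_function_arn
}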
I'm building a Terraform template to create Azure resources, including Key Vault secrets. The customer's subscription policy doesn't allow anyone to update/delete/view Key Vault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives me the following error:
Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...
on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
15: resource "azurerm_key_vault" "keyvault" {
How can I get my CI/CD working when terraform apply will be run continuously?
Is there a way to work around this policy in Terraform?
Is there a way to prevent Terraform from updating the Key Vault once it is created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
type = string
}
variable "secrets" {
type = map(string)
}
locals {
secret_names = keys(var.secrets)
}
resource "azurerm_key_vault_secret" "secret" {
count = length(var.secrets)
name = local.secret_names[count.index]
value = var.secrets[local.secret_names[count.index]]
key_vault_id = var.keyvault_id
}
data "azurerm_key_vault_secret" "secrets" {
count = length(var.secrets)
depends_on = [azurerm_key_vault_secret.secret]
name = local.secret_names[count.index]
key_vault_id = var.keyvault_id
}
output "keyvault_secret_attributes" {
value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is the module from my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}
Using Terraform for provisioning and managing a Key Vault with those kinds of limitations sounds like a bad idea. Terraform's main idea is to monitor the state of your resources; if it is not allowed to read a resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource and cannot.
If your goal is just to create secrets in a Key Vault, I would just use the az keyvault commands, like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
An optimal solution would of course be that the service principal you use for executing Terraform commands has sufficient rights to perform the actions it was created for.
I know this is a late answer, but for future visitors:
The pipeline running the Terraform plan and apply will need proper access to the key vault.
So, if you are running your CI/CD from Azure Pipelines, you would typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal that has Contributor rights (at least at resource group level) for it to provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's Enterprise Application object id) at least list, get, and set permissions for secrets.
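As a rough sketch of that access policy in Terraform (the variable names for the tenant and the pipeline's object id are assumptions, and the capitalized permission names follow the azurerm 3.x provider):

resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = module.keyvault.keyvault_id
  tenant_id    = var.tenant_id              # hypothetical: your AAD tenant id
  object_id    = var.pipeline_sp_object_id  # hypothetical: the service principal's Enterprise Application object id

  secret_permissions = ["Get", "List", "Set"]
}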
I am trying to invoke one AWS Lambda from another and perform Lambda chaining. The rationale behind doing this is that AWS does not allow multiple triggers from the same S3 bucket.
I have created one Lambda with an S3 trigger. The Java code of the first Lambda listens to the S3 event and contains the invocation of the second Lambda. Both Lambdas are created with Terraform.
Lambda A has the S3 trigger, so it is invoked on an S3 event on a particular bucket. Lambda A does its processing and then invokes Lambda B with an invoke request. The Java code in Lambda A that invokes Lambda B is:
public class EventHandler implements RequestHandler<S3Event, String> {

    private final AWSLambda lambdaClient = AWSLambdaClientBuilder.defaultClient();

    @Override
    public String handleRequest(S3Event event, Context context) throws RuntimeException {
        // 'json' is the payload built from the S3 event (not shown in the question)
        InvokeRequest req = new InvokeRequest()
                .withFunctionName("LambdaFunctionB")
                .withPayload(json);
        lambdaClient.invoke(req);
        return "Lambda B invoked";
    }
}
Both the lambdas are created using terraform. Scripts below:
Lambda A terraform:
module "lambda_function" {
source = "Git Path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionA"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "EventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_bucket" {
statement_id = "statementId"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "s3.bucket.arn"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "bucketName"
lambda_function {
lambda_function_arn = "${module.lambda_function.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "path/subPath"
}
}
Lambda B terraform:
module "lambda_function" {
source = "git path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionB"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "LambdaBEventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_lambda" {
statement_id = "AllowExecutionFromLambda"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionA"
}
lambda-iam-role has the below policies attached:
AmazonS3FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
AmazonSNSFullAccess
CloudWatchEventsFullAccess
My expectation was that Lambda A would successfully invoke Lambda B, but I am getting an AccessDeniedException in Lambda A's logs and it is not able to invoke Lambda B. The error is:
com.amazonaws.services.lambda.model.AWSLambdaException: User: arn:aws:sts::xxxxxxxxx:assumed-role/lambda-iam-role/LambdaFunctionA is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: f495ede3-b3cb-47a1-b884-16996545233d)
Hope this helps you; it's not exactly the same case, but it shows invoking one Lambda from another Lambda: Github
I think the Lambda's role needs the "lambda:InvokeFunction" permission as well.
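For illustration, a sketch of granting that permission to the execution role in Terraform (the role and function names are taken from the question; the resource and policy names are hypothetical):

resource "aws_iam_role_policy" "allow_invoke_lambda_b" {
  name = "allow-invoke-lambda-b" # hypothetical policy name
  role = "lambda-iam-role"

  # Let Lambda A's role call lambda:InvokeFunction on Lambda B.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "lambda:InvokeFunction"
      Resource = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionB"
    }]
  })
}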
I found an answer online, using the aws-sdk.
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
  region: 'default'
});

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  Payload: JSON.stringify(event, null, 2) // pass params
}, function(error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
Hope it helps :)