Invoke Lambda with Terraform - permissions

I want to invoke an AWS Lambda function that I am creating with Terraform (both the deployment and the invocation).
Terraform assumes a role in another account.
provider.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.33.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789101:role/AssumedRole"
    session_name = "TF"
  }
}
The invocation happens through a Terraform data source:
data "aws_lambda_invocation" "start-execution" {
function_name = aws_lambda_function.start-execution-lambda.function_name
depends_on = [aws_lambda_function.start-execution-lambda]
input = <<JSON
{
"key1": "value1",
"key2": "value2"
}
JSON
}
The assumed role has the lambda:* permission. Unfortunately, I am still missing some permission, because when I deploy the Terraform I get:
╷
│ Error: AccessDeniedException:
│ status code: 403, request id:
│
│ with data.aws_lambda_invocation.start-execution,
When I grant administrator access to the assumed role I can invoke the Lambda, so it seems there is another service (not Lambda) that Terraform uses to invoke a Lambda function.

I had a lambda:* permission on my Lambda resource:
Action:
  - lambda:*
Resource:
  - arn:aws:lambda:Region:AccountID:function:My-function
but the invoke action operates on a Lambda version, so it needs a qualified permission, i.e. the resource ARN must also match qualified ARNs:
Action:
  - lambda:*
Resource:
  - arn:aws:lambda:Region:AccountID:function:My-function:*
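For reference, a minimal sketch of that policy as a Terraform resource (the policy name and role attachment are illustrative; the Region/AccountID placeholders are kept from the snippet above):

resource "aws_iam_role_policy" "lambda_invoke" {
  name = "lambda-invoke" # illustrative name
  role = "AssumedRole"   # the role Terraform assumes

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "lambda:*"
      Resource = [
        "arn:aws:lambda:Region:AccountID:function:My-function",
        "arn:aws:lambda:Region:AccountID:function:My-function:*" # qualified ARNs (versions/aliases)
      ]
    }]
  })
}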

Related

Error in creating EC2 resource using Terraform

I'm trying to create an EC2 instance using Terraform (I'm new to the area). I'm following the tutorial, but I think there's something wrong with the user I created in AWS.
Steps I followed:
Create a user in AWS:
a) I added it to a group that has the AmazonEC2FullAccess policy.
b) I created the credentials to use the AWS CLI.
I used the file suggested by the Terraform tutorial:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
I ran the aws configure command and entered the access key and secret key values.
I ran terraform init and it worked.
When I run terraform plan, this error appears:
Error: configuring Terraform AWS Provider: error validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: xxxxxxxxxxxxxxxx, api error InvalidClientTokenId: The security token included in the request is invalid.
Any idea?
I had missed the "profile" parameter in main.tf. So, the new file is:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
It works now!
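As a side note, a minimal sketch of what the profile argument points at, assuming the credentials from step 1 were saved under a named profile (the name "dev" here is illustrative) with aws configure --profile dev:

provider "aws" {
  region  = "us-east-1"
  profile = "dev" # must match a profile in ~/.aws/credentials
}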

Error while spinning up an EC2 instance with a key using a Terraform script

I want to provision an EC2 instance with a key pair and run a script inside the instance.
filename: instance.tf
resource "aws_key_pair" "mykey" {
key_name = "terraform-nverginia"
public_key = "${file ("${var.PATH_TO_PUBLIC_KEY}")}"
}
resource "aws_instance" "demo" {
ami = "${lookup (var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykey.key_name}"
tags = {
Name = "T-instance"
}
provisioner "file" { // copying file from local to remote server
source = "deployscript.sh"
destination = "/home/ec2-user/deploy.sh" //check if both the file names are same or not.
}
provisioner "remote-exec" { // executing script to do some deployment in the server.
inline = [
"chmod +x /home/ec2-user/deploy.sh",
"sudo /home/ec2-user/deploy.sh"
]
}
connection {
type = "ssh" // To connect to the instance
user = "${var.INSTANCE_USERNAME}"
host = "122.171.19.4" // My personal laptop's ip address
private_key = "${file ("${var.PATH_TO_PRIVATE_KEY}")}"
}
} // end of resource aws_instance
//-------------------------------------------------
filename: provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
filename: vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}

variable "AWS_REGION" {
  default = "us-east-1"
}

variable "AMIS" {
  type = map
  default = {
    us-east-1 = "ami-0574da719dca65348"
    us-east-2 = "ami-0a606d8395a538502"
  }
}

variable "PATH_TO_PRIVATE_KEY" {
  default = "terraform-nverginia"
}

variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}

variable "INSTANCE_USERNAME" {
  default = "ec2-user"
}
filename: terraform.tfvars
AWS_ACCESS_KEY = "<Access key>"
AWS_SECRET_KEY = "<Secret key>"
Error:
PS D:\Rajiv\DevOps-Practice\Terraform\demo-2> terraform plan
╷
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root module and configure the
│ provider's required arguments as described in the provider documentation.
│
│ Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 594b6dab-e087-4678-8c57-63a65c3d3d41, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on <empty> line 0:
│ (source code not available)
I am expecting an EC2 instance to be created and the script to be run.
Providers are plugins that help Terraform interact with specific cloud services. You must declare and install a cloud provider before you can use that cloud service via Terraform; see https://developer.hashicorp.com/terraform/language/providers. In your code, try adding the AWS provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.48.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
Then run terraform init to install the provider.
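Since the posted vars.tf already declares AWS_ACCESS_KEY and AWS_SECRET_KEY, one way to fill in the configuration options is a sketch like this (access_key and secret_key are standard provider arguments; exporting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead also works):

provider "aws" {
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY # supplied via terraform.tfvars
  secret_key = var.AWS_SECRET_KEY # keep these out of source control
}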

How to structure terraform code to get Lambda ARN after creation?

This was a previous question I asked: How to get AWS Lambda ARN using Terraform?
That question was answered, but it turned out not to actually solve my problem, so this is a follow-up.
The terraform code I have written provisions a Lambda function:
Root module:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region  = var.region
  profile = var.aws_profile
}

module "aws_lambda_function" {
  source = "./modules/lambda_function"
}
Child module:
resource "aws_lambda_function" "lambda_function" {
function_name = "lambda_function"
handler = "lambda_function.lambda_handler"
runtime = "python3.8"
filename = "./task/dist/package.zip"
role = aws_iam_role.lambda_exec.arn
}
resource "aws_iam_role" "lambda_exec" {
name = "aws_iam_lambda"
assume_role_policy = file("policy.json")
}
What I want the user to be able to do to get the Lambda ARN:
terraform output
The problem: I cannot seem to include the following code anywhere in my Terraform code, as it causes a "ResourceNotFoundException: Function not found..." error.
data "aws_lambda_function" "existing" {
function_name = "lambda_function"
}
output "arn" {
value = data.aws_lambda_function.existing.arn
}
Where or how do I need to include this to get the ARN, or is this even possible?
You can't look up the data for a resource you are creating at the same time. You need to output the ARN from the module, and then output it again from the main Terraform template.
In your Lambda module:
output "arn" {
  value = aws_lambda_function.lambda_function.arn
}
Then in your main file:
output "arn" {
  value = module.aws_lambda_function.arn
}
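With those two outputs in place, terraform apply followed by terraform output arn prints the ARN; terraform output -raw arn (available since Terraform 0.15) prints it without quotes, which is convenient in scripts.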

Prevent KeyVault from updating secrets using Terraform

I'm building a Terraform template to create Azure resources, including Key Vault secrets. The customer's subscription policy doesn't allow anyone to update/delete/view Key Vault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives the following error:
Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...
on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
15: resource "azurerm_key_vault" "keyvault" {
How can I keep my CI/CD working, given that terraform apply will be run continuously?
Is there a way to pass this policy in terraform?
Is there a way to prevent terraform from updating KV once it created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
type = string
}
variable "secrets" {
type = map(string)
}
locals {
secret_names = keys(var.secrets)
}
resource "azurerm_key_vault_secret" "secret" {
count = length(var.secrets)
name = local.secret_names[count.index]
value = var.secrets[local.secret_names[count.index]]
key_vault_id = var.keyvault_id
}
data "azurerm_key_vault_secret" "secrets" {
count = length(var.secrets)
depends_on = [azurerm_key_vault_secret.secret]
name = local.secret_names[count.index]
key_vault_id = var.keyvault_id
}
output "keyvault_secret_attributes" {
value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is the module from my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}
Using Terraform to provision and manage a key vault with that kind of limitation sounds like a bad idea. Terraform's main idea is to monitor the state of your resources; if it is not allowed to read a resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource and cannot.
If your goal is just to create secrets in a key vault, I would just use the az keyvault commands like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
An optimal solution would of course be that the service principal you use for executing Terraform commands has sufficient rights to perform the actions it was created for.
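If you do keep the vault in Terraform, the question's last point (preventing updates after creation, short of locking) can be approximated with Terraform's standard lifecycle meta-argument. A minimal sketch against the resource named in the error; note this only suppresses the update attempt, and Terraform still needs permission to read the resource during refresh:

resource "azurerm_key_vault" "keyvault" {
  # ... existing arguments ...

  lifecycle {
    ignore_changes = all # never attempt an in-place update after creation
  }
}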
I know this is a late answer, but for future visitors:
The pipeline running the Terraform plan and apply will need proper access to the key vault.
So, if you are running your CI/CD from Azure Pipelines, you would typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal that has Contributor rights (at least at the resource group level) for it to provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's Enterprise Application object ID) at least the list, get and set permissions for secrets.
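A minimal sketch of that access policy in Terraform, assuming the principal running Terraform is the one that needs access (the azurerm_client_config data source exposes the caller's tenant and object IDs; resource names here are illustrative):

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = module.keyvault.keyvault_id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = data.azurerm_client_config.current.object_id # the principal running Terraform

  secret_permissions = ["get", "list", "set"]
}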

Lambda chaining - Invoke lambda from another lambda using terraform

I am trying to invoke one AWS Lambda from another to perform Lambda chaining. The rationale behind doing this is that AWS does not provide multiple triggers from the same S3 bucket.
I have created one Lambda with an S3 trigger. The Java code of the first Lambda listens to the S3 event and contains the invocation of the other Lambda. Both Lambdas are created by Terraform.
Lambda A has the S3 trigger and will be invoked on an S3 event on a particular bucket. Lambda A does the processing and then invokes Lambda B using an invoke request. The Java code in Lambda A that invokes Lambda B is:
public class EventHandler implements RequestHandler<S3Event, String> {
    @Override
    public String handleRequest(S3Event event, Context context) throws RuntimeException {
        InvokeRequest req = new InvokeRequest()
                .withFunctionName("LambdaFunctionB")
                .withPayload(json);
        // invoke Lambda B through the AWS SDK client
        AWSLambda client = AWSLambdaClientBuilder.defaultClient();
        client.invoke(req);
        return "Lambda B invoked";
    }
}
Both Lambdas are created using Terraform. Scripts below:
Lambda A terraform:
module "lambda_function" {
source = "Git Path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionA"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "EventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_bucket" {
statement_id = "statementId"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "s3.bucket.arn"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "bucketName"
lambda_function {
lambda_function_arn = "${module.lambda_function.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "path/subPath"
}
}
Lambda B terraform:
module "lambda_function" {
source = "git path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionB"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "LambdaBEventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_lambda" {
statement_id = "AllowExecutionFromLambda"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionA"
}
lambda-iam-role has the following policies attached:
AmazonS3FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
AmazonSNSFullAccess
CloudWatchEventsFullAccess
The expectation was that Lambda A would successfully invoke Lambda B. But I am getting an AccessDeniedException in Lambda A's logs and it is not able to invoke Lambda B. The error is:
com.amazonaws.services.lambda.model.AWSLambdaException: User: arn:aws:sts::xxxxxxxxx:assumed-role/lambda-iam-role/LambdaFunctionA is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: f495ede3-b3cb-47a1-b884-16996545233d)
Hope this helps you. It is not exactly the same, but it shows invoking one Lambda from another Lambda: Github
I think the Lambda's execution role needs the lambda:InvokeFunction permission as well.
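A minimal sketch of that fix in Terraform, reusing the role and function names from the question (the policy name is illustrative):

resource "aws_iam_role_policy" "allow_invoke_lambda_b" {
  name = "allow-invoke-lambda-b" # illustrative name
  role = "lambda-iam-role"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "lambda:InvokeFunction"
      Resource = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionB"
    }]
  })
}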
I found an answer online, using the aws-sdk.
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
  region: 'default'
});

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  Payload: JSON.stringify(event, null, 2) // pass params
}, function(error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
Hope it helps
:)
