Prevent KeyVault from updating secrets using Terraform - continuous-integration

I'm building a terraform template to create Azure resources including Keyvault Secrets. The customer Subscription policy doesn't allow anyone to update/delete/view keyvault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives the following error:
Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...
on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
15: resource "azurerm_key_vault" "keyvault" {
How can I get my CI/CD working when that means terraform apply will be run continuously?
Is there a way to get past this policy in Terraform?
Is there a way to prevent Terraform from updating the Key Vault once it is created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
  type = string
}

variable "secrets" {
  type = map(string)
}

locals {
  secret_names = keys(var.secrets)
}

resource "azurerm_key_vault_secret" "secret" {
  count        = length(var.secrets)
  name         = local.secret_names[count.index]
  value        = var.secrets[local.secret_names[count.index]]
  key_vault_id = var.keyvault_id
}

data "azurerm_key_vault_secret" "secrets" {
  count        = length(var.secrets)
  depends_on   = [azurerm_key_vault_secret.secret]
  name         = local.secret_names[count.index]
  key_vault_id = var.keyvault_id
}

output "keyvault_secret_attributes" {
  value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is how the modules are called from my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}

Using Terraform to provision and manage a key vault with that kind of limitation sounds like a bad idea. Terraform's main idea is to monitor the state of your resources; if it is not allowed to read a resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource and is not allowed to read it.
If your goal is just to create secrets in a key vault, I would just use the az keyvault commands like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
The optimal solution would of course be for the service principal you use to execute Terraform commands to have sufficient rights to perform the actions it was created for.
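If granting those rights is possible, something along these lines should do it (a sketch; the vault name and the service principal's appId are placeholders, not from the question):
az keyvault set-policy --name myKeyvault --spn <appId> --secret-permissions get list set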

I know this is a late answer, but for future visitors:
The pipeline running the Terraform Plan and Apply will need to have proper access to the key vault.
So, if you are running your CI/CD from Azure Pipelines, you would typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal that has Contributor rights (at least at the resource group level) so that it can provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's enterprise application object ID) at least list, get and set permissions for secrets.
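For illustration, such an access policy could also be managed from Terraform itself, assuming the pipeline runs as the current client (a sketch, not part of the original modules):
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = module.keyvault.keyvault_id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  # object ID of the service principal behind the pipeline's service connection
  object_id    = data.azurerm_client_config.current.object_id

  secret_permissions = ["get", "list", "set"]
}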

Related

Event bus name not registering when attempting to connect eventbridge and lambda using terraform

I am attempting to create an Eventbridge that will get notifications from Datadog, and trigger a lambda function to store the notifications to an S3 bucket. This is all going to be done through Terraform.
The following is the code I have written:
###################################
# Eventbridge Integration to Lambda
###################################
data "aws_cloudwatch_event_source" "datadog_event_source" {
  name_prefix = var.spog_event_bus_name # aws.partner/datadog.com/my_eventbus
}

resource "aws_cloudwatch_event_bus" "datadog_event_bus" {
  name              = data.aws_cloudwatch_event_source.datadog_event_source.name
  event_source_name = data.aws_cloudwatch_event_source.datadog_event_source.name
}

resource "aws_cloudwatch_event_rule" "spog_cloudwatch_rule" {
  name           = "spog_cloudwatch_rule"
  event_bus_name = aws_cloudwatch_event_bus.datadog_event_bus.name

  event_pattern = <<EOF
{
  "account": [
    "${var.aws_account_id}"
  ]
}
EOF
}

resource "aws_cloudwatch_event_target" "spog_cloudwatch_event_target" {
  rule      = aws_cloudwatch_event_rule.spog_cloudwatch_rule.name
  target_id = aws_lambda_function.write_datadog_events.function_name
  arn       = aws_lambda_function.write_datadog_events.arn
}

resource "aws_lambda_permission" "spog_allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.write_datadog_events.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.spog_cloudwatch_rule.arn
}
Here, the write_datadog_events is the lambda function to store the notifications.
The build succeeds, but when I try to apply the plan, I get an error saying "validationException: EventBus name starting with 'aws.' is not valid". From inspecting the AWS console, it seems that the EventBridge rule is created successfully, but the event bus name is not registered properly on it; the event bus name only says default. I thought that by changing the event_bus_name value within aws_cloudwatch_event_rule it would not be default, but I was wrong.
Can anyone help me out with this? The lambda function itself is not wrong (I ran a test case on it); the core issue seems to be EventBridge not registering my event bus name. Also, although the rule is generated, the event bus is never generated.
Thanks for your help.

Configure Resource "aws_cloudfront_distribution" with EC2 as the origin with Terraform

I am setting up an "aws_cloudfront_distribution" with Terraform and attempting to set an EC2 instance as my origin.
In my module I have:
origin {
  domain_name = var.domain_name
  origin_id   = var.origin_id
}
In the main file I call this module and use the output of the ec2 public dns.
module "cloudfront" {
  source           = "./modules/cloudfront"
  domain_name      = module.ec2.ec2_public_dns
  origin_id        = "myid"
  target_origin_id = "myid"
}
When I run plan, I have no issues. However when I run apply and begin the build process I get the following error:
error creating CloudFront Distribution: InvalidArgument: The parameter
Origin DomainName does not refer to a valid S3 bucket.
status code: 400
I am using Terraform 0.13.6 because of company restrictions tied to other infra in the company. Is this a Terraform version issue, or am I missing something in my configuration steps?
So, I figured out this issue by adding the custom_origin_config block within the origin block. The solution looks like the following:
origin {
  domain_name = var.domain_name
  origin_id   = var.origin_id

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "match-viewer"
    origin_ssl_protocols   = ["TLSv1"]
  }
}
Terraform defaults to an S3 origin if you don't define the custom_origin_config block: the AWS provider then expects the domain name to resolve to an S3 bucket, not an arbitrary AWS FQDN.
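For contrast, an S3-backed origin would look roughly like this (a sketch; the bucket domain and origin access identity are placeholders, not from the question):
origin {
  domain_name = "my-bucket.s3.amazonaws.com"  # an S3 bucket domain name
  origin_id   = "myS3Origin"

  s3_origin_config {
    origin_access_identity = "origin-access-identity/cloudfront/EXAMPLE"
  }
}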

provisioning an EC2 instance with terraform InvalidKeyPair.NotFound

I've created a key pair for EC2 called terraform and downloaded the pem file to the same directory where my terraform files live. When I issue a terraform apply, I get:
aws_instance.windows: Creating...
Error: Error launching source instance: InvalidKeyPair.NotFound: The key pair 'terraform' does not exist
status code: 400, request id: 1ac563d4-244a-4371-bde7-ee9bcf048830
I'm specifying the name of the key pair via an environment variable. This is the start of the block I'm using to create the Windows virtual machine:
resource "aws_instance" "windows" {
  ami                         = data.aws_ami.Windows_2019.image_id
  instance_type               = var.windows_instance_types
  key_name                    = var.key_name
  vpc_security_group_ids      = [aws_security_group.allow_rdp_winrm.id]
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.subnet1.id
  get_password_data           = "true"
  user_data                   = file("scripts/user_data.txt")
There is obviously something I'm doing wrong. Do I need to tell Terraform which AWS region the key pair resides in?
Key pairs are regional, so if you created yours in one region, it isn't available in the others.
Terraform will always try to find and use the key in the region you tell it to run in, and if the key is not present there, AWS will complain with this error.
Terraform also doesn't like it when things are created out of band, and you might run into complications. It's also much cleaner to create the key pair using Terraform, and you can reference it as Atul has posted in his answer.
As alternatives, you could also import the existing key into Terraform or use Terraform's data sources to look it up, but these are a bit more advanced, especially if you're just getting started with Terraform.
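As a rough sketch of those points (the region below is an assumption, not taken from the question):
# The provider's region must match the region where the "terraform" key pair was created.
provider "aws" {
  region = "us-east-1"
}

# An existing key pair can also be brought under Terraform's management, e.g.:
#   terraform import aws_key_pair.deployer terraform
# (the import ID is the key pair name; a matching aws_key_pair resource block is still required)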
You need to create a key pair first before consuming it.
resource "aws_key_pair" "my_key_pair" {
  key_name   = var.key_name
  public_key = file("${abspath(path.cwd)}/my-key.pub")
}
Now use the key as
resource "aws_instance" "windows" {
  ami           = data.aws_ami.Windows_2019.image_id
  instance_type = var.windows_instance_types
  key_name      = aws_key_pair.my_key_pair.key_name
I'll only provide an answer that pertains to your error message directly, as this question came up first on a Bing search.
The data source documentation doesn't mention the key pair's region explicitly; it is presumably inferred from the instance's availability zone.
data "aws_key_pair" "example" {
  key_name = "terraform"

  filter {
    name   = "tag:Component"
    values = ["web"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  key_name      = data.aws_key_pair.example.key_name
  instance_type = "t3.micro"

  tags = {
    Name = "HelloWorld"
  }
}
Instance resource declaration: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
AWS Keypair Data Source: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/key_pair

terraform cli does not accept aws credentials

I am trying to create an EC2 instance using Terraform. Passing credentials through the Terraform CLI fails, while hardcoding them in main.tf works fine.
This is to create an EC2 instance dynamically using Terraform.
terraform apply works with the following main.tf:
provider "aws" {
  region     = "us-west-2"
  access_key = "hard-coded-access-key"
  secret_key = "hard-coded-secret-key"
}

resource "aws_instance" "ec2-instance" {
  ami           = "ami-id"
  instance_type = "t2.micro"

  tags {
    Name = "test-inst"
  }
}
while the following does not work:
terraform apply -var access_key="hard-coded-access-key" -var secret_key="hard-coded-secret-key"
Is there any difference in the above two ways of running the commands? As per terraform documentation both of the above should work.
Every Terraform module can use input variables, including the root module. But before using input variables, you must declare them.
Create a variables.tf file in the same folder as your main.tf file:
variable "credentials" {
  type = object({
    access_key = string
    secret_key = string
  })
  description = "My AWS credentials"
}
Then you can reference the input variables in your code like this:
provider "aws" {
  region     = "us-west-2"
  access_key = var.credentials.access_key
  secret_key = var.credentials.secret_key
}
And you can either run:
terraform apply -var credentials.access_key="hard-coded-access-key" -var credentials.secret_key="hard-coded-secret-key"
Or you could create a terraform.tfvars file with the following content:
# ------------------
# AWS Credentials
# ------------------
credentials = {
  access_key = "hard-coded-access-key"
  secret_key = "hard-coded-secret-key"
}
And then simply run terraform apply.
But the key point is that you must declare input variables before using them.
@Felipe's answer is right, but I would never recommend defining the access key and secret key in variables.tf. What you should do instead is leave them out and set the keys using aws configure. Another option is to create keys for Terraform deployment purposes only, using aws configure --profile terraform, or without a profile using plain aws configure.
Your connection.tf or main.tf will then look like this:
provider "aws" {
  # You can use an AWS credentials file to specify your credentials.
  # The default location is $HOME/.aws/credentials on Linux and OS X,
  # or "%USERPROFILE%\.aws\credentials" for Windows users.
  region = "us-west-2"

  # profile configured during aws configure --profile
  profile = "terraform"

  # you can also restrict accounts here, to allow only particular accounts for deployment
  allowed_account_ids = ["12*****45"]
}
You can also point to a separate file for the secret key and access key. The reason for this is that variables.tf is part of your configuration and lives in source control (e.g. Bitbucket), so it's better not to place these sensitive keys in variables.tf.
You can create a file somewhere on your system and give the path to the keys in the provider section:
provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "$HOME/secret/credentials"
}
Here is the format of the credentials file
[default]
aws_access_key_id = A*******Z
aws_secret_access_key = A*******/***xyz

Unable to utilize the existing pem file to create EC2 instance by terraform

I am wondering how to stop the endless retries behind the error message below so that the AWS EC2 instances actually get created.
Terraform code below:
provider "aws" {
  region = "${var.location}"
}

resource "aws_instance" "ins1_ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.inst_type}"

  tags = {
    Name = "cluster"
  }

  provisioner "remote-exec" {
    inline = [
      "hostnamectl set-hostname centos-76-1",
    ]
  }
}

resource "aws_eip" "ins1_eip" {
  instance = "${aws_instance.ins1_ec2.id}"
  vpc      = false
}

resource "aws_instance" "ins2_ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.inst_type}"

  provisioner "remote-exec" {
    inline = [
      "hostnamectl set-hostname centos-76-2",
    ]
  }

  tags = {
    Name = "cluster"
  }
}

resource "aws_eip" "ins2_eip" {
  instance = "${aws_instance.ins2_ec2.id}"
  vpc      = false
}
It errors out with the below message:
* aws_instance.ins2_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
* aws_instance.ins1_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
I have a pem file on my laptop which I can copy to my AWS build server, so can I use key_name in the EC2 instance creation? The file I have, "test.pem", is the private key.
What I don't know is how to log in to the VM: with the key_name (test.pem) that I already have, or with a username/password? There does not seem to be a way to create a username and password in the aws_instance block.
Terraform EC2 instance documentation is at the link below:
https://www.terraform.io/docs/providers/aws/r/instance.html
If you want to attach a key to an EC2 instance while creating it with Terraform, you first need to create a key pair in the AWS console, download the .pem file, and copy the key pair name.
The Terraform script requires the correct key name to associate it with the EC2 instance.
If you want to perform any remote action on the instance from Terraform, the following things are required.
The instance should have an IP which Terraform can connect to.
Terraform needs to connect to the instance via SSH or RDP.
Both ways require the key file (.pem file) downloaded earlier to be used while making the connection.
So the connection is the missing part in your Terraform configuration.
Consider the following Terraform configuration, which creates one t1.micro instance with a key associated with it and then creates a file on the instance by SSHing into it.
Network requirements, such as the VPC, subnet, route tables, internet gateway, security groups etc., are already created in the AWS console, and their respective IDs are used in the Terraform configuration below.
provider "aws" {
  region     = "<<region>>"
  access_key = "<<access_key>>"
  secret_key = "<<secret_key>>"
}

resource "aws_instance" "ins1_ec2" {
  ami           = "<<ami_id>>"
  instance_type = "<<instance_type>>"

  // id of the public subnet so that the instance is reachable over the internet for SSH
  subnet_id = "<<subnet_id>>"

  // id of the security group which has the ports open to all IPs
  vpc_security_group_ids = ["<<security_group_id>>"]

  // assigning a public IP to the instance is required
  associate_public_ip_address = true

  key_name = "<<key_name>>"

  tags = {
    Name = "cluster"
  }

  provisioner "remote-exec" {
    inline = [
      // Command to create a file on the instance
      "echo 'Some data' > SomeData.txt",
    ]

    // Connection to be used by the provisioner to perform remote executions
    connection {
      // Use the public IP of the instance to connect to it
      host        = "${aws_instance.ins1_ec2.public_ip}"
      type        = "ssh"
      user        = "ec2-user"
      private_key = "${file("<<pem_file>>")}"
      timeout     = "1m"
      agent       = false
    }
  }
}

resource "aws_eip" "ins1_eip" {
  instance = "${aws_instance.ins1_ec2.id}"
  vpc      = true
}
When you run the terraform apply command, if Terraform is able to SSH to the instance, it displays the provisioner's connection and execution output.
You might still see errors if the commands being executed fail due to some other error or permission issue, but if you see that output it means that Terraform has connected to the instance successfully.
That's the Terraform configuration which will create an EC2 instance, connect to it via SSH and perform remote execution tasks on it.
The .pem file can also be used to SSH to the instance from your local machine.
This should help you resolve your issue.
More information about connections in Terraform is available in the provisioner connection documentation.
The following worked for me:
Create a security group and make sure you add SSH (port 22) with source 0.0.0.0/0 in the inbound rules (a sketch of such a security group is shown after the instance config below).
Copy the ID of the security group and add it to the vpc_security_group_ids list in the Terraform config.
Head to the AWS console and either create a new key pair or locate the existing key to use.
Get the name of the key pair from the console and reference it in the Terraform config as key_name.
If you created a new key, make sure you downloaded the pem file and changed its permissions with chmod 400 myPrivateKey.pem.
Once you have applied the Terraform config, just connect with ssh -i myPrivateKey.pem ec2-user@<public-ip>.
Your Terraform config for the EC2 resource will look like:
resource "aws_instance" "my-sample" {
  ami                         = "ami-xxxxx"
  instance_type               = "t2.micro"
  associate_public_ip_address = true
  key_name                    = "MyPrivateKey"
  vpc_security_group_ids      = ["sg-0f073685ght54lkm"]
}
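And a minimal sketch of the security group from the first step (the name and description are illustrative; reference it from the instance as vpc_security_group_ids = [aws_security_group.ssh_access.id]):
resource "aws_security_group" "ssh_access" {
  name        = "allow-ssh"
  description = "Allow inbound SSH from anywhere"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}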
