Terraform output configuration - amazon-ec2

I have a Terraform problem. I want to display outputs such as the availability zone. I have two EC2 instances, plus a data source that uses a filter to fetch details (such as the availability zone) for one of the instances and forward the result to an output, but I am getting an error.
main.tf
resource "aws_instance" "ec2-1"{
ami="ami-0a54aef4ef3b5f881"
instance_type="t2.micro"
tags={
Name="Instance-1"
}
depends_on=[aws_instance.ec2-2]
}
resource "aws_instance" "ec2-2"{
ami="ami-026dea5602e368e96"
instance_type="t2.micro"
tags={
Name="Instance-2"
}
}
data "aws_instance" "instancesearch"{
filter {
name="tag:Name"
values=["Instance-2"]
}
}
output "instanceid"{
value = data.aws_instance.instancesearch.availability_zone
}
terraform plan result:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_instance.instancesearch: Refreshing state...
Error: Your query returned no results. Please change your search criteria and try again.
on main.tf line 19, in data "aws_instance" "instancesearch":
19: data "aws_instance" "instancesearch"{

You do not need a separate data source to get the availability zone; you can read it as an attribute on the resource itself. The data source fails because it is read during refresh, before the instances have been created, so the filter matches nothing. Try this instead:
output "availability_zone" {
value = aws_instance.ec2-2.availability_zone
}
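If you do want to keep the data-source approach (for example, to look up an instance managed elsewhere), one possible fix is to add depends_on to the data source so the lookup is deferred until the instance exists; this is a sketch, not from the original answer, and note that Terraform will then report the result as "known after apply" during plan:

data "aws_instance" "instancesearch" {
  # Defer the read until aws_instance.ec2-2 has been created.
  depends_on = [aws_instance.ec2-2]

  filter {
    name   = "tag:Name"
    values = ["Instance-2"]
  }
}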

Related

How to get AWS Lambda ARN using Terraform?

I am trying to define a terraform output block that returns the ARN of a Lambda function. The Lambda is defined in a sub-module. According to the documentation it seems like the lambda should just have an ARN attribute already: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_function#arn
Using that as a source I thought I should be able to do the following:
output "lambda_arn" {
value = module.aws_lambda_function.arn
}
This generates the following error:
Error: Unsupported attribute
on main.tf line 19, in output "lambda_arn":
19: value = module.aws_lambda_function.arn
This object does not have an attribute named "arn".
I would appreciate any input, thanks.
The documentation is correct: the data source aws_lambda_function does have an arn attribute. However, you are trying to access arn on a custom module, module.aws_lambda_function. To expose it, you have to define an output named arn inside your module.
So in your module you should have something like this:
data "aws_lambda_function" "existing" {
function_name = "function-to-get"
}
output "arn" {
value = data.aws_lambda_function.existing.arn
}
Then if you have your module called aws_lambda_function:
module "aws_lambda_function" {
source = "path-to-module"
}
you will be able to access the arn:
module.aws_lambda_function.arn
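With that module output in place, the root-level output from the question works unchanged:

output "lambda_arn" {
  value = module.aws_lambda_function.arn
}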

provisioning an EC2 instance with terraform InvalidKeyPair.NotFound

I've created a key pair for EC2 called terraform and downloaded the pem file to the same directory where my Terraform files live. When I issue terraform apply I get:
aws_instance.windows: Creating...
Error: Error launching source instance: InvalidKeyPair.NotFound: The key pair 'terraform' does not exist
status code: 400, request id: 1ac563d4-244a-4371-bde7-ee9bcf048830
I'm specifying the name of the key pair via an environment variable. This is the start of the block I'm using to create the Windows virtual machine:
resource "aws_instance" "windows" {
ami = data.aws_ami.Windows_2019.image_id
instance_type = var.windows_instance_types
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_rdp_winrm.id]
associate_public_ip_address = true
subnet_id = aws_subnet.subnet1.id
get_password_data = "true"
user_data = file("scripts/user_data.txt")
There is obviously something I'm doing wrong. Do I need to tell Terraform which AWS region the key pair resides in?
Key pairs are regional, so if you created yours in one region, it isn't available in the others.
Terraform will always try to find and use the key in the region you tell it to run in, and if the key is not present there, AWS will complain with this error.
Terraform also doesn't like it when things are created out of band, and you might run into complications. It's also much cleaner to create the key pair using Terraform, and you can reference it as Atul has posted in his answer.
You could also import the key into Terraform, or use Terraform's data sources to search for and find the key, as alternatives, but these are a bit advanced, especially if you're just getting started with Terraform.
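For instance, importing is a single command once a matching resource block exists; a sketch, assuming a resource named my_key_pair as in the answer below and a key already uploaded as terraform in the target region (aws_key_pair is imported by key name):

terraform import aws_key_pair.my_key_pair terraform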
You need to create a key pair first before consuming it.
resource "aws_key_pair" "my_key_pair" {
key_name = var.key_name
public_key = file("${abspath(path.cwd)}/my-key.pub")
}
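If you don't already have a public key file to reference there, one can be generated locally with standard OpenSSH tooling (my-key is an illustrative name matching the block above):

ssh-keygen -t rsa -b 4096 -f my-key

This writes the private key to my-key and the public key to my-key.pub.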
Now use the key as
resource "aws_instance" "windows" {
ami = data.aws_ami.Windows_2019.image_id
instance_type = var.windows_instance_types
key_name = aws_key_pair.my_key_pair.key_name
I'll only provide an answer that pertains to your error message directly, as this question came up first on a Bing search.
The instance documentation doesn't make much mention of key pairs, but there is a dedicated aws_key_pair data source for looking up an existing key:
data "aws_key_pair" "example" {
key_name = "terraform"
filter {
name = "tag:Component"
values = ["web"]
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
key_name = data.aws_key_pair.example.key_name
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
Instance resource declaration: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
AWS Keypair Data Source: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/key_pair

attaching different Security Groups to different EC2s

Requirement:
I have multiple groups (say 2 groups) of EC2 instances, where each group contains 6 instances, and I have to attach a different SG to each group.
Example:
Group 1 contains Head1 and children EC2-1, EC2-2, ... (6 in total) and needs SG1 attached.
Group 2 contains Head2 and children EC2-3, EC2-4, ... (6 in total) and needs SG2 attached.
I don't want to write a separate resource "aws_instance" for each group.
Head-Module:
resource "aws_security_group" "sg" {
count = var.ec2_instance_count
name = "${local.account}${count.index}"
vpc_id = local.vpc_id
}
resource "aws_instance" "ec2_instance" {
count = var.ec2_instance_count
security_groups = [element(aws_security_group.sg.*.id, count.index)]
}
Child-Module:
data "aws_security_groups" "data_security_group" {
filter {
name = "group-name"
values = ["${local.account}${count.index}"]
}
}
resource "aws_instance" "ec2_child" {
count = var.ec2_instance_count*var.numberofchild
security_groups = [element(aws_security_group.data_security_group.*.id, count.index)]
}
Error: Error launching source instance: InvalidGroup.NotFound: The security group 'terraform-20200824151444795600000001' does not exist in VPC 'vpc-ghhje85abcy'
status code: 400, request id: 9260fd88-a03a-4c46-b67c-3287594cdab5
on main.tf line 68, in resource "aws_instance" "ec2_instance":
68: resource "aws_instance" "ec2_instance" {
Note: I am using data "aws_security_groups" instead of data "aws_security_group". If I use the latter, I can only get one SG into the data resource and it throws the error "multiple Security Groups matched". Switching to data "aws_security_groups" made that error vanish, but the latest error I am facing is the InvalidGroup.NotFound one above.
Update: I am able to use the data resource and attach different SGs to different EC2 instances. The only remaining issue is random sequencing: for all 6 EC2 instances of group 1, I want the first SG assigned, and so on.
Don't use the data source. Instead, create your resource "aws_security_group" using a count, like you do on your resource "aws_instance"; that way you can reference the groups directly:
resource "aws_security_group" "sg" {
count = var.ec2_instance_count
name = "${local.account}${count.index}"
vpc_id = local.vpc_id
}
resource "aws_instance" "ec2_instance" {
count = var.ec2_instance_count
security_groups = [element(aws_security_group.sg.*.id, count.index)]
}
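A caveat worth noting: in a non-default VPC, aws_instance expects security group IDs via vpc_security_group_ids rather than names via security_groups (the follow-up below switches to vpc_security_group_ids for this reason). A minimal sketch:

resource "aws_instance" "ec2_instance" {
  count = var.ec2_instance_count

  # Inside a non-default VPC, reference security groups by ID.
  vpc_security_group_ids = [element(aws_security_group.sg.*.id, count.index)]
}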
Thanks Helder, I created the resources with count. It's not a huge infrastructure, but a fairly complex one: 8 groups, each with 1 parent and 6 child EC2 instances.
There is 1 external SG for all parents, and 8 internal SGs (1 for each group).
I had to follow a provisioning sequence, because the requirement was to pass the parent host name to the respective group of children in the children's user data, so I had to keep them in separate modules and used data resources for reuse.
ParentModule:
resource "aws_instance" "ec2_instance" {
count = tonumber(var.mycount)
vpc_security_group_ids = [data.aws_security_group.external_security_group.id, element(data.aws_security_group.internal_security_group.*.id, count.index)]
...
}
resource "aws_security_group" "internal_security_group" {
count = tonumber(var.mycount)
name = "${var.internalSGname}${count.index}"
}
resource "aws_security_group" "external_security_group" {
name = ${var.external_sg_name}"
}
ChildModule: uses a data resource and a dynamic map to assign the SGs to the proper group of EC2 instances.
data "aws_security_group" "internal_security_group" {
count = tonumber(var.mycount)
filter {
name = "group-name"
values = "${var.internalSGname}${count.index}"]
}
}
resource "aws_instance" "ec2_child" {
count = local.child_count * tonumber(var.mycount)
vpc_security_group_ids = ["${element(data.aws_security_group.internal_security_group.*.id, "${lookup(local.SG_lookup, count.index, 99)}")}"]
variable.tf
locals {
  SG_lookup = {
    for n in range(0, local.child_count * tonumber(var.mycount)) :
    n => floor(n / local.child_count)
  }
}
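As a worked example of the lookup map (assuming child_count = 6 and mycount = 2, values not from the original post), the for expression yields the following, so children 0-5 get group 0's SG and children 6-11 get group 1's:

SG_lookup = {
  0 = 0, 1 = 0, 2 = 0, 3 = 0, 4 = 0, 5 = 0,
  6 = 1, 7 = 1, 8 = 1, 9 = 1, 10 = 1, 11 = 1
}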

Prevent KeyVault from updating secrets using Terraform

I'm building a Terraform template to create Azure resources, including Key Vault secrets. The customer's subscription policy doesn't allow anyone to update/delete/view Key Vault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives the following error:
Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...
on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
15: resource "azurerm_key_vault" "keyvault" {
How can I keep my CI/CD working, given that terraform apply will be run continuously?
Is there a way to get past this policy in Terraform?
Is there a way to prevent Terraform from updating the Key Vault once it is created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
type = string
}
variable "secrets" {
type = map(string)
}
locals {
secret_names = keys(var.secrets)
}
resource "azurerm_key_vault_secret" "secret" {
count = length(var.secrets)
name = local.secret_names[count.index]
value = var.secrets[local.secret_names[count.index]]
key_vault_id = var.keyvault_id
}
data "azurerm_key_vault_secret" "secrets" {
count = length(var.secrets)
depends_on = [azurerm_key_vault_secret.secret]
name = local.secret_names[count.index]
key_vault_id = var.keyvault_id
}
output "keyvault_secret_attributes" {
value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is the module from my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}
Using Terraform to provision and manage a Key Vault with those kinds of limitations sounds like a bad idea. Terraform's main idea is to monitor the state of your resources; if it is not allowed to read a resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource and cannot.
If your goal is just to create secrets in a Key Vault, I would use the az keyvault commands instead, like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
The optimal solution, of course, would be for the service principal you use to execute Terraform commands to have sufficient rights to perform the actions it was created for.
I know this is a late answer, but for future visitors:
The pipeline running the Terraform plan and apply will need proper access to the Key Vault.
So, if you are running your CI/CD from Azure Pipelines, you would typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal with Contributor rights (at least at the resource group level) so it can provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's Enterprise Application object ID) at least list, get, and set permissions for secrets.
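A minimal sketch of such an access policy in Terraform, assuming the tenant ID and the pipeline service principal's object ID are supplied as variables (the variable names here are illustrative):

resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = module.keyvault.keyvault_id
  tenant_id    = var.tenant_id              # tenant that hosts the service principal
  object_id    = var.pipeline_sp_object_id  # the SP's Enterprise Application object ID

  # Minimum permissions for Terraform to create and read back secrets.
  secret_permissions = ["get", "list", "set"]
}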

Unable to utilize the existing pem file to create EC2 instance by terraform

I am wondering how to stop the endless retry loop behind the error message below, so that the AWS EC2 instances actually get created.
Terraform code below:
provider "aws" {
region = "${var.location}"
}
resource "aws_instance" "ins1_ec2" {
ami = "${var.ami}"
instance_type = "${var.inst_type}"
tags = {
Name = "cluster"
}
provisioner "remote-exec" {
inline = [
"hostnamectl set-hostname centos-76-1",
]
}
}
resource "aws_eip" "ins1_eip" {
instance = "${aws_instance.ins1_ec2.id}"
vpc = false
}
resource "aws_instance" "ins2_ec2" {
ami = "${var.ami}"
instance_type = "${var.inst_type}"
provisioner "remote-exec" {
inline = [
"hostnamectl set-hostname centos-76-2",
]
}
tags = {
Name = "cluster"
}
}
resource "aws_eip" "ins2_eip" {
instance = "${aws_instance.ins2_ec2.id}"
vpc = false
}
It errors out with the below message:
* aws_instance.ins2_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
* aws_instance.ins1_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
I have a pem file on my laptop which I can copy to my AWS build server, so can I use key_name in the EC2 instance creation? The file I have, "test.pem", is the private key, right?
What I don't know is how to log in to the VM: with the key (test.pem) which I already have, or with a username/password? There does not seem to be a provision to create a username and password in the aws_instance block.
Terraform EC2 instance documentation is at the link below:
https://www.terraform.io/docs/providers/aws/r/instance.html
If you want to attach a key to an EC2 instance while you create it using Terraform, you need to first create a key pair in the AWS console, download the .pem file, and copy the key pair name.
The Terraform script requires the correct key name to associate the key with the EC2 instance.
If you want to perform any remote action on the instance from Terraform, the following things are required:
The instance should have an IP which Terraform can connect to.
Terraform needs to connect to the instance via SSH or RDP.
Both ways require the key file (the .pem file downloaded earlier) to be used while making the connection.
So the connection is the missing part in your Terraform configuration.
Consider the following Terraform configuration, which creates one t1.micro instance with a key associated with it and then creates a file on the instance by doing SSH into it.
Network requirements such as the VPC, subnet, route tables, internet gateway, and security groups are already created in the AWS console, and their respective IDs are used in the Terraform configuration below.
provider "aws" {
region = "<<region>>",
access_key="<<access_key>>",
secret_key="<<secret_key>>"
}
resource "aws_instance" "ins1_ec2" {
ami = "<<ami_id>>"
instance_type = "<<instance_type>>"
//id of the public subnet so that the instance is accessible via internet to do SSH
subnet_id = "<<subnet_id>>"
//id of the security group which has ports open to all the IPs
vpc_security_group_ids=["<<security_group_id>>"]
//assigning public IP to the instance is required.
associate_public_ip_address=true
key_name = "<<key_name>>"
tags = {
Name = "cluster"
}
provisioner "remote-exec" {
inline = [
//Executing command to creating a file on the instance
"echo 'Some data' > SomeData.txt",
]
//Connection to be used by provisioner to perform remote executions
connection {
//Use public IP of the instance to connect to it.
host = "${aws_instance.ins1_ec2.public_ip}"
type = "ssh"
user = "ec2-user"
private_key = "${file("<<pem_file>>")}"
timeout = "1m"
agent = false
}
}
}
resource "aws_eip" "ins1_eip" {
instance = "${aws_instance.ins1_ec2.id}"
vpc = true
}
When you run the terraform apply command, if Terraform is able to SSH to the instance, the provisioner's output will show the connection being established and the commands running.
You might still see errors if the commands being executed fail due to some other error or permission issue, but once you see that output, it means Terraform has connected to the instance successfully.
That's the Terraform configuration which will create an EC2 instance, connect to it via SSH, and perform remote execution tasks on it.
The .pem file can also be used to SSH to the instance from your local machine.
This should help you resolve your issue.
More information about connection blocks in Terraform is available in the provisioner documentation.
The following did work for me:
Create a security group and make sure you add SSH (port 22) with source 0.0.0.0/0 in the inbound rules.
Copy the ID of the security group and add it to the terraform config for the key vpc_security_group_ids list.
Head to the AWS console, and either create a new key pair or locate the existing key to use.
Get the name of the key pair from the console and refer to it in the terraform config for the key key_name.
If you created a new key, make sure you downloaded the pem file and changed its permissions with chmod 400 myPrivateKey.pem.
Once you have applied the terraform config, just connect as ssh -i myPrivateKey.pem ec2-user@<public-ip>.
Your terraform config for the ec2 resource will look like:
resource "aws_instance" "my-sample" {
ami = "ami-xxxxx"
instance_type = "t2.micro"
associate_public_ip_address = true
key_name = "MyPrivateKey"
vpc_security_group_ids = ["sg-0f073685ght54lkm"]
}
