How to create multiple VCNs at the same time - Oracle

I have a problem: I need to create more than one VCN.
I want to set the variables in a JSON file, like this:
init_values.json
{
  "terraform": {
    "tenancy_ocid": "ocid1.ten.xxxxxxxxxxxxxxxxxx",
    "user_ocid": "ocid1.user..xxxxxxxxxxxxxxxxxx",
    "private_key_path": "/Users/user/.oci/oci_api_key.pem",
    "fingerprint": "a8:8e:.xxxxxxxxxxxxxxxxxx",
    "region": "eu-frankfurt-1"
  },
  "vcn": [
    {
      "name": "vcn_1",
      "cidr": "44.144.224.0/25"
    },
    {
      "name": "vcn_2",
      "cidr": "44.144.224.128/25"
    }
  ]
}
and my vcn.tf file looks like this:
locals {
  vcn_data = jsondecode(file("${path.module}/init_values.json"))
  all_vcn  = [for my_vcn in local.vcn_data.vcn : my_vcn.name]
  all_cidr = [for my_cidr in local.vcn_data.vcn : my_cidr.cidr]
}

resource "oci_core_vcn" "these" {
  compartment_id = local.json_data.COMPARTMENT.root_compartment
  display_name   = local.all_vcn
  cidr_block     = local.all_cidr
}
and provider.tf is:
provider "oci" {
//alias = "home"
tenancy_ocid = local.json_data.TERRAFORM.tenancy_ocid
user_ocid = local.json_data.TERRAFORM.user_ocid
private_key_path = local.json_data.TERRAFORM.private_key_path
fingerprint = local.json_data.TERRAFORM.fingerprint
region = local.json_data.TERRAFORM.region
}
and the error is the following:
│ Error: Incorrect attribute value type
│
│ on vcn.tf line 39, in resource "oci_core_vcn" "these":
│ 39: display_name = local.all_vcn
│ ├────────────────
│ │ local.all_vcn is tuple with 2 elements
│
│ Inappropriate value for attribute "display_name": string required.
╵
What could be my error? Where am I wrong?
Thanks

Probably, instead of:
all_cidr = [for my_cidr in local.vcn_data.vcn : my_vcn.cidr ]
it should be:
all_cidr = [for my_cidr in local.vcn_data.vcn : my_cidr.cidr ]
Update
You have to use count or for_each to create multiple VCNs:
resource "oci_core_vcn" "these" {
count = length(local.all_vcn)
compartment_id = local.json_data.COMPARTMENT.root_compartment
display_name = local.all_vcn[each.index]
cidr_block = local.all_cidr[each.index]
}
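Alternatively (a sketch, assuming the same init_values.json and the same compartment reference used in the question), you can for_each over the decoded JSON objects directly and avoid maintaining two parallel lists:

resource "oci_core_vcn" "these" {
  # one VCN per entry in the "vcn" array, keyed by its name
  for_each = { for v in local.vcn_data.vcn : v.name => v }

  compartment_id = local.json_data.COMPARTMENT.root_compartment
  display_name   = each.value.name
  cidr_block     = each.value.cidr
}

With for_each, adding or removing an entry in the JSON only affects that one VCN in the plan instead of shifting list indexes.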

Related

How to describe a stop/start AWS EC2 instances process on a schedule in Terraform

I'm trying to figure out how to simply start/stop EC2 instances on a schedule with EventBridge.
This behavior can easily be set up via the AWS web console (EventBridge → Rules → Create Rule → AWS service → EC2 StopInstances API call), but I can't figure out how to describe this rule in Terraform.
The only possible way I found is to create a Lambda, but that looks like huge overhead for this simple action. Is there any way to add an EC2 StopInstances API call rule with Terraform?
Okay, looks like it is possible to control instance running time with SSM Automation:
variables.tf
variable "start_cron_representation" {
type = string
}
variable "stop_cron_representation" {
type = string
}
variable "instance_id" {
type = string
}
variable "instance_type" {
description = "ec2 or rds"
type = string
}
locals.tf
locals {
  stop_task_name  = var.instance_type == "rds" ? "AWS-StopRdsInstance" : "AWS-StopEC2Instance"
  start_task_name = var.instance_type == "rds" ? "AWS-StartRdsInstance" : "AWS-StartEC2Instance"

  permissions = var.instance_type == "rds" ? [
    "rds:StopDBInstance",
    "rds:StartDBInstance",
    "rds:DescribeDBInstances",
    "ssm:StartAutomationExecution"
  ] : [
    "ec2:StopInstances",
    "ec2:StartInstances",
    "ec2:DescribeInstances",
    "ssm:StartAutomationExecution"
  ]
}
main.tf
data "aws_iam_policy_document" "ssm_lifecycle_trust" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"ssm.amazonaws.com",
"events.amazonaws.com"
]
}
}
}
data "aws_iam_policy_document" "ssm_lifecycle" {
statement {
effect = "Allow"
actions = local.permissions
resources = ["*"]
}
statement {
effect = "Allow"
actions = [
"iam:PassRole"
]
resources = [aws_iam_role.ssm_lifecycle.arn]
}
}
resource "aws_iam_role" "ssm_lifecycle" {
name = "${var.instance_type}-power-control-role"
assume_role_policy = data.aws_iam_policy_document.ssm_lifecycle_trust.json
}
resource "aws_iam_policy" "ssm_lifecycle" {
name = "${var.instance_type}-power-control-policy"
policy = data.aws_iam_policy_document.ssm_lifecycle.json
depends_on = [
aws_iam_role.ssm_lifecycle
]
}
resource "aws_iam_role_policy_attachment" "ssm_lifecycle" {
policy_arn = aws_iam_policy.ssm_lifecycle.arn
role = aws_iam_role.ssm_lifecycle.name
}
resource "aws_cloudwatch_event_rule" "stop_instance" {
name = "stop-${var.instance_type}"
description = "Stop ${var.instance_type} instance"
schedule_expression = var.stop_cron_representation
}
resource "aws_cloudwatch_event_target" "stop_instance" {
target_id = "stop-${var.instance_type}"
arn = "arn:aws:ssm:ap-northeast-1::automation-definition/${local.stop_task_name}"
rule = aws_cloudwatch_event_rule.stop_instance.name
role_arn = aws_iam_role.ssm_lifecycle.arn
input = <<DOC
{
"InstanceId": ["${var.instance_id}"],
"AutomationAssumeRole": ["${aws_iam_role.ssm_lifecycle.arn}"]
}
DOC
}
resource "aws_cloudwatch_event_rule" "start_instance" {
name = "start-${var.instance_type}"
description = "Start ${var.instance_type} instance"
schedule_expression = var.start_cron_representation
}
resource "aws_cloudwatch_event_target" "start_instance" {
target_id = "start-${var.instance_type}"
arn = "arn:aws:ssm:ap-northeast-1::automation-definition/${local.start_task_name}"
rule = aws_cloudwatch_event_rule.start_instance.name
role_arn = aws_iam_role.ssm_lifecycle.arn
input = <<DOC
{
"InstanceId": ["${var.instance_id}"],
"AutomationAssumeRole": ["${aws_iam_role.ssm_lifecycle.arn}"]
}
DOC
}
This module can be called like this:
module "ec2_start_and_stop" {
source = "./module_folder"
start_cron_representation = "cron(0 0 * * ? *)"
stop_cron_representation = "cron(0 1 * * ? *)"
instance_id = aws_instance.instance_name.id # or aws_db_instance.db.id for RDS
instance_type = "ec2" # or "rds" for RDS
depends_on = [
aws_instance.instance_name
]
}

Error: error collecting instance settings: empty result when running terraform apply (terraform plan works fine)

I don't quite understand how a Terraform directory is meant to be set up, but mine seems pretty basic. It keeps complaining about empty values even though they are set. Can someone please take a look and tell me what the issue could be?
Snippet of bootcamp2.tf:
provider "aws" {
region = var.region
default_tags {
tags = {
source = "/home/ubuntu/bootcamp-terraform-master"
owner_name = var.owner-name
owner_email = var.owner-email
purpose = var.purpose
}
}
}
// Resources
resource "aws_instance" "zookeepers" {
count = var.zk-count
ami = var.aws-ami-id
instance_type = var.zk-instance-type
key_name = var.key-name
root_block_device {
volume_size = 100
}
tags = {
Name = "${var.owner-name}-zookeeper-${count.index}"
"bootcamp2.tf" 269L, 7806C 14,0-1 Top
provider "aws" {
region = var.region
default_tags {
tags = {
source = "/home/ubuntu/bootcamp-terraform-master"
owner_name = var.owner-name
owner_email = var.owner-email
purpose = var.purpose
}
}
}
// Resources
resource "aws_instance" "zookeepers" {
count = var.zk-count
ami = var.aws-ami-id
instance_type = var.zk-instance-type
key_name = var.key-name
root_block_device {
volume_size = 100
}
tags = {
Name = "${var.owner-name}-zookeeper-${count.index}"
description = "zookeeper nodes - Managed by Terraform"
role = "zookeeper"
zookeeperid = count.index
Schedule = "zookeeper-mon-8am-fri-6pm"
sshUser = var.linux-user
region = var.region
role_region = "zookeepers-${var.region}"
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "zookeepers" {
count = var.zk-count
zone_id = var.hosted-zone-id
name = "zookeeper-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.zookeepers.*.private_ip, count.index)}"]
}
resource "aws_instance" "brokers" {
count = var.broker-count
ami = var.aws-ami-id
instance_type = var.broker-instance-type
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
# security_groups = ["${var.security_group}"]
key_name = var.key-name
root_block_device {
volume_size = 64 # 64 GB
}
tags = {
Name = "${var.owner-name}-broker-${count.index}"
description = "broker nodes - Managed by Terraform"
nice-name = "kafka-${count.index}"
big-nice-name = "follower-kafka-${count.index}"
brokerid = count.index
role = "broker"
sshUser = var.linux-user
# sshPrivateIp = true // this is only checked for existence, not if it's true or false by terraform.py (ati)
createdBy = "terraform"
Schedule = "kafka-mon-8am-fri-6pm"
# ansible_python_interpreter = "/usr/bin/python3"
#EntScheduler = "mon,tue,wed,thu,fri;1600;mon,tue,wed,thu;fri;sat;0400;"
region = var.region
role_region = "brokers-${var.region}"
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "brokers" {
count = var.broker-count
zone_id = var.hosted-zone-id
name = "kafka-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.brokers.*.private_ip, count.index)}"]
}
resource "aws_instance" "connect-cluster" {
count = var.connect-count
ami = var.aws-ami-id
instance_type = var.connect-instance-type
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
key_name = var.key-name
tags = {
Name = "${var.owner-name}-connect-${count.index}"
description = "Connect nodes - Managed by Terraform"
role = "connect"
Schedule = "mon-8am-fri-6pm"
sshUser = var.linux-user
region = var.region
role_region = "connect-${var.region}"
}
root_block_device {
volume_size = 20 # 20 GB
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "connect-cluster" {
count = var.connect-count
zone_id = var.hosted-zone-id
name = "connect-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.connect-cluster.*.private_ip, count.index)}"]
}
resource "aws_instance" "schema" {
count = var.schema-count
ami = var.aws-ami-id
instance_type = var.schema-instance-type
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
key_name = var.key-name
tags = {
Name = "${var.owner-name}-schema-${count.index}"
description = "Schema nodes - Managed by Terraform"
role = "schema"
Schedule = "mon-8am-fri-6pm"
sshUser = var.linux-user
region = var.region
role_region = "schema-${var.region}"
}
root_block_device {
volume_size = 20 # 20 GB
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "schema" {
count = var.schema-count
zone_id = var.hosted-zone-id
name = "schema-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.schema.*.private_ip, count.index)}"]
}
resource "aws_instance" "control-center" {
count = var.c3-count
ami = var.aws-ami-id
instance_type = var.c3-instance-type
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
key_name = var.key-name
root_block_device {
volume_size = 64 # 64GB
}
tags = {
Name = "${var.owner-name}-control-center-${count.index}"
description = "Control Center nodes - Managed by Terraform"
role = "schema"
Schedule = "mon-8am-fri-6pm"
sshUser = var.linux-user
region = var.region
role_region = "schema-${var.region}"
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "control-center" {
count = var.c3-count
zone_id = var.hosted-zone-id
name = "controlcenter-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.control-center.*.private_ip, count.index)}"]
}
resource "aws_instance" "rest" {
count = var.rest-count
ami = var.aws-ami-id
instance_type = var.rest-instance-type
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
key_name = var.key-name
root_block_device {
volume_size = 20 # 20 GB
}
tags = {
Name = "${var.owner-name}-rest-${count.index}"
description = "Rest nodes - Managed by Terraform"
role = "schema"
Schedule = "mon-8am-fri-6pm"
sshUser = var.linux-user
region = var.region
role_region = "schema-${var.region}"
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "rest" {
count = var.rest-count
zone_id = var.hosted-zone-id
name = "rest-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.rest.*.private_ip, count.index)}"]
}
resource "aws_instance" "ksql" {
count = var.ksql-count
ami = var.aws-ami-id
instance_type = var.ksql-instance-type
availability_zone = var.availability-zone[count.index % length(var.availability-zone)]
key_name = var.key-name
root_block_device {
volume_size = 64 # 64 GB
}
tags = {
Name = "${var.owner-name}-ksql-${count.index}"
description = "Rest nodes - Managed by Terraform"
role = "schema"
Schedule = "mon-8am-fri-6pm"
sshUser = var.linux-user
region = var.region
role_region = "schema-${var.region}"
}
subnet_id = var.subnet-id[count.index % length(var.subnet-id)]
vpc_security_group_ids = var.vpc-security-group-ids
associate_public_ip_address = true
}
resource "aws_route53_record" "ksql" {
count = var.ksql-count
zone_id = var.hosted-zone-id
name = "ksql-${count.index}.${var.dns-suffix}"
type = "A"
ttl = "300"
records = ["${element(aws_instance.ksql.*.private_ip, count.index)}"]
}
terraform plan runs fine, but I keep running into these errors when running terraform apply:
Error: error collecting instance settings: empty result
│
│ with aws_instance.zookeepers[1],
│ on bootcamp2.tf line 17, in resource "aws_instance" "zookeepers":
│ 17: resource "aws_instance" "zookeepers" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.zookeepers[0],
│ on bootcamp2.tf line 17, in resource "aws_instance" "zookeepers":
│ 17: resource "aws_instance" "zookeepers" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.zookeepers[2],
│ on bootcamp2.tf line 17, in resource "aws_instance" "zookeepers":
│ 17: resource "aws_instance" "zookeepers" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.brokers[0],
│ on bootcamp2.tf line 53, in resource "aws_instance" "brokers":
│ 53: resource "aws_instance" "brokers" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.brokers[1],
│ on bootcamp2.tf line 53, in resource "aws_instance" "brokers":
│ 53: resource "aws_instance" "brokers" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.brokers[2],
│ on bootcamp2.tf line 53, in resource "aws_instance" "brokers":
│ 53: resource "aws_instance" "brokers" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.connect-cluster[0],
│ on bootcamp2.tf line 97, in resource "aws_instance" "connect-cluster":
│ 97: resource "aws_instance" "connect-cluster" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.connect-cluster[1],
│ on bootcamp2.tf line 97, in resource "aws_instance" "connect-cluster":
│ 97: resource "aws_instance" "connect-cluster" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.schema[0],
│ on bootcamp2.tf line 131, in resource "aws_instance" "schema":
│ 131: resource "aws_instance" "schema" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.schema[1],
│ on bootcamp2.tf line 131, in resource "aws_instance" "schema":
│ 131: resource "aws_instance" "schema" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.control-center[0],
│ on bootcamp2.tf line 165, in resource "aws_instance" "control-center":
│ 165: resource "aws_instance" "control-center" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.rest[0],
│ on bootcamp2.tf line 200, in resource "aws_instance" "rest":
│ 200: resource "aws_instance" "rest" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.ksql[0],
│ on bootcamp2.tf line 236, in resource "aws_instance" "ksql":
│ 236: resource "aws_instance" "ksql" {
│
╵
╷
│ Error: error collecting instance settings: empty result
│
│ with aws_instance.ksql[1],
│ on bootcamp2.tf line 236, in resource "aws_instance" "ksql":
│ 236: resource "aws_instance" "ksql" {
All the variables are set in the variables.tf file, and references are made to the .tfvars file:
variable "owner-name" {
default = "wetfwefwef"
}
variable "owner-email" {
default = "stwfefxef.io"
}
variable "dns-suffix" {
default = "srgrwgsofxfwegwegia"
description = "Suffix for DNS entry in Route 53. No spaces!"
}
variable "purpose" {
default = "rhwgrwx"
}
variable "key-name" {
default = "tertqwf"
}
variable "zk-count" {
default = 3
}
variable "broker-count" {
default = 3
}
variable "connect-count" {
default = 2
}
variable "schema-count" {
default = 2
}
variable "rest-count" {
default = 1
}
variable "c3-count" {
default = 1
}
variable "ksql-count" {
default = 2
}
variable "zk-instance-type" {
default = "t3a.large"
}
variable "broker-instance-type" {
default = "t3a.large"
}
variable "schema-instance-type" {
default = "t3a.large"
}
variable "connect-instance-type" {
default = "t3a.large"
}
variable "rest-instance-type" {
default = "t3a.large"
}
variable "c3-instance-type" {
default = "t3a.large"
}
variable "ksql-instance-type" {
default = "t3a.large"
}
variable "client-instance-type" {
default = "t3a.large"
}
variable "hosted-zone-id" {
}
variable "aws-ami-id" {
default = "ami-00000000"
}
variable "linux-user" {
default = "ubuntu" // ec2-user
}
variable "vpc-id" {
}
variable "subnet-id" {
type = list(string)
}
variable "vpc-security-group-ids" {
type = list(string)
}
I stumbled on this trying to quickly find the answer for why I was getting the same error.
I'm pretty sure it's because the default AMI you're supplying doesn't exist. Otherwise, I think you're possibly supplying a bad value as a variable, or the AMI is not shared with the account you're running it in.
In my case, it was the last problem: in the console, I had added the account to share the AMI with, but hadn't followed up with a save :-/
Error: error collecting instance settings: empty result
isn't very descriptive for diagnosing the problem. It could potentially be some other field not giving results, I guess; I haven't looked further. If it were a problem with the key pair, as suggested in one of the comments, you would have clearly seen it in the error message, including InvalidKeyPair.NotFound.
To debug further, you can increase the logging, e.g. export TF_LOG=debug.
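As a side note, one way to avoid a stale hardcoded AMI id is to look it up with a data source and reference that instead of var.aws-ami-id. A sketch, assuming an Ubuntu 20.04 x86_64 image owned by Canonical; adjust the filters to whatever image you actually use:

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account id

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# then, in each aws_instance block:
#   ami = data.aws_ami.ubuntu.id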

Trying to use terraform to create multiple EC2 instances with separate route 53 records

So I have a project where I'm trying to do something simple: create a reusable project that could create the following:
EC2 SG - 1 per workload
EC2 instances - could be 1 or more
Route 53 records - 1 per EC2 instance created
This project works just fine with just 1 instance, but when I increase the count to anything besides 1, I get the following:
Error: Invalid index
│
│ on .terraform/modules/ec2_servers_dns_name/main.tf line 18, in resource "aws_route53_record" "this":
│ 18: records = split(",", var.records[count.index])
│ ├────────────────
│ │ count.index is 1
│ │ var.records is list of string with 1 element
│
│ The given key does not identify an element in this collection value: the
│ given index is greater than or equal to the length of the collection.
In my core main.tf, the modules for EC2 and Route 53 look like this:
# EC2 Instances
module "ec2_servers" {
ami = data.aws_ami.server.id
associate_public_ip_address = var.ec2_associate_public_ip_address
disable_api_termination = var.ec2_disable_api_termination
ebs_optimized = var.ec2_ebs_optimized
instance_count = var.ec2_servers_instance_count
instance_dns_names = var.ec2_servers_dns_name_for_tags
instance_type = var.ec2_servers_instance_type
key_name = var.ec2_key_name
monitoring = var.ec2_enhanced_monitoring
name = format("ec2-%s-%s-server",local.tags["application"],local.tags["environment"])
rbd_encrypted = var.ec2_rbd_encrypted
rbd_volume_size = var.ec2_rbd_volume_size
rbd_volume_type = var.ec2_rbd_volume_type
subnet_id = concat(data.terraform_remote_state.current-vpc.outputs.app_private_subnets)
user_data = var.ec2_user_data
vpc_security_group_ids = [module.ec2_security_group_servers.this_security_group_id, var.baseline_sg]
tags = local.tags
}
# Create DNS entry for EC2 Instances
module "ec2_servers_dns_name" {
domain = var.domain_name
instance_count = var.ec2_servers_instance_count
name = var.ec2_servers_dns_name
private_zone = "true"
records = module.ec2_servers.private_ip
ttl = var.ttl
type = var.record_type
providers = {
aws = aws.network
}
}
And the resources (EC2/Route53) in our core module repo are shown below:
EC2
locals {
is_t_instance_type = replace(var.instance_type, "/^t[23]{1}\\..*$/", "1") == "1" ? "1" : "0"
}
resource "aws_instance" "this" {
count = var.instance_count
ami = var.ami
associate_public_ip_address = var.associate_public_ip_address
credit_specification {
cpu_credits = local.is_t_instance_type ? var.cpu_credits : null
}
disable_api_termination = var.disable_api_termination
ebs_optimized = var.ebs_optimized
iam_instance_profile = var.iam_instance_profile
instance_initiated_shutdown_behavior = var.instance_initiated_shutdown_behavior
instance_type = var.instance_type
ipv6_addresses = var.ipv6_addresses
ipv6_address_count = var.ipv6_address_count
key_name = var.key_name
lifecycle {
ignore_changes = [private_ip, root_block_device, ebs_block_device, volume_tags, user_data, ami]
}
monitoring = var.monitoring
placement_group = var.placement_group
private_ip = var.private_ip
root_block_device {
encrypted = var.rbd_encrypted
volume_size = var.rbd_volume_size
volume_type = var.rbd_volume_type
}
secondary_private_ips = var.secondary_private_ips
source_dest_check = var.source_dest_check
subnet_id = element(var.subnet_id, count.index)
tags = merge(tomap({"Name"= var.name}), var.tags, var.instance_dns_names[count.index])
tenancy = var.tenancy
user_data = var.user_data[count.index]
volume_tags = var.volume_tags
vpc_security_group_ids = var.vpc_security_group_ids
}
Route53
data "aws_route53_zone" "this" {
name = var.domain
private_zone = var.private_zone
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 2.7.0"
}
}
}
resource "aws_route53_record" "this" {
count = var.instance_count
name = var.name[count.index]
records = split(",", var.records[count.index])
type = var.type
ttl = var.ttl
zone_id = data.aws_route53_zone.this.zone_id
}
It seems like the issue is possibly with the output for the private IP for EC2, but I'm not sure. Here's the output for the private IP in the EC2 resource:
output "private_ip" {
description = "The private IP address assigned to the instance."
value = [aws_instance.this[0].private_ip]
}
And the records variable in the R53 resource is set to a list.
Any thoughts on how to pull the private IPs of the EC2 instances (whether there's one or multiple) and have each private IP consumed dynamically in the R53 module, so the R53 records can be created without issue?
Try:
output "private_ip" {
  description = "The private IP addresses assigned to the instances."
  value       = aws_instance.this[*].private_ip
}
The [0] index returns only one element; the [*] splat returns the private IP of every instance as a list.
EDIT: add tostring() if a single string value is needed.
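For completeness, a minimal sketch of how the corrected output lines up with the existing Route 53 module (assuming the output now returns one IP per instance): the split(",", ...) call still works, because each element is a single IP and split just wraps it in a one-element list.

output "private_ip" {
  description = "The private IP addresses assigned to the instances, one per element."
  value       = aws_instance.this[*].private_ip
}

# in the calling configuration (unchanged from the question):
#   records = module.ec2_servers.private_ip
#
# in the Route 53 module, count.index now stays within range:
#   records = split(",", var.records[count.index]) # => a one-element list per record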

Terraform. How to identify element (key) using for_each

I'm building an AWS Network LB with target groups, and I'm stuck on aws_lb_listener: adding several target_group_arn values across several http_tcp_listeners.
I.e. I have two aws_lb_target_group resources for ports 80 and 443 and two http_tcp_listeners for these same ports.
But I get this error message:
in resource "aws_lb_listener" "frontend_http_tcp":
│ 172: target_group_arn = each.value.arn
│ ├────────────────
│ │ each.value is map of string with 4 elements
│
│ This map does not have an element with the key "arn".
variable "aws_lb_target_group" {
description = "aws_lb_target_group"
type = map(any)
default = {
http = {
name = "http"
target_type = "instance"
port = 80
protocol = "TCP"
protocol_version = "HTTP1"
type = "source_ip"
enabled = false
path_health_check = "/health.html"
matcher_health_check = "200" # has to be HTTP 200 or fails
},
https = {
name = "https"
target_type = "instance"
port = 443
protocol = "TCP"
protocol_version = "HTTP2"
type = "source_ip"
enabled = false
path_health_check = "/health.html"
matcher_health_check = "200" # has to be HTTP 200 or fails
}
}
}
variable "http_tcp_listeners" {
description = "aws_lb_listener"
type = map(any)
default = {
http = {
port = "80"
protocol = "TCP"
action_type = "forward"
alpn_policy = "HTTP1Only"
},
https = {
port = "443"
protocol = "TCP"
action_type = "forward"
certificate_arn = "data.terraform_remote_state.acm.outputs.acm_certificate_arn"
alpn_policy = "HTTP2Preferred"
}
}
}
resource "aws_lb_target_group" "main" {
for_each = var.aws_lb_target_group
name = "test-group-${random_pet.this.id}-${each.value.name}"
target_type = each.value.target_type
port = each.value.port
protocol = each.value.protocol
protocol_version = each.value.protocol_version
vpc_id = local.vpc_id
stickiness {
type = "source_ip"
enabled = false
}
health_check {
path = each.value.path_health_check
port = each.value.port
healthy_threshold = 3
unhealthy_threshold = 3
interval = 30
}
depends_on = [
aws_lb.main,
]
}
resource "aws_lb_listener" "frontend_http_tcp" {
for_each = var.http_tcp_listeners
load_balancer_arn = aws_lb.main.arn
port = each.value.port
protocol = each.value.protocol
certificate_arn = data.terraform_remote_state.acm.outputs.acm_certificate_arn
alpn_policy = each.value.alpn_policy
dynamic "default_action" {
for_each = aws_lb_target_group.main
content {
type = "forward"
target_group_arn = each.value.arn
}
}
depends_on = [
aws_lb.main,
aws_lb_target_group.main,
]
}
When you use dynamic blocks, the iterator is not each but the name of the block. So I think it should be:
target_group_arn = default_action.value.arn
To have only one default_action per listener, try:
default_action {
  type             = "forward"
  target_group_arn = aws_lb_target_group.main[each.key].arn
}
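Putting the fix together, a minimal sketch of the listener (assuming one target group per listener, matched by the shared "http"/"https" map keys, and the same variables and resources as in the question):

resource "aws_lb_listener" "frontend_http_tcp" {
  for_each = var.http_tcp_listeners

  load_balancer_arn = aws_lb.main.arn
  port              = each.value.port
  protocol          = each.value.protocol
  # certificate_arn and alpn_policy kept as in the question
  certificate_arn   = data.terraform_remote_state.acm.outputs.acm_certificate_arn
  alpn_policy       = each.value.alpn_policy

  default_action {
    type = "forward"
    # the listener key ("http"/"https") matches the target group key
    target_group_arn = aws_lb_target_group.main[each.key].arn
  }
}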

create azure vm from custom image using terraform error

I need to provision VMs in Azure from a custom image using Terraform. Everything works fine with an image from the marketplace, but when I try to specify my custom image, an error is returned. I have been banging my head all day on this issue.
Here is my tf script:
resource "azurerm_windows_virtual_machine" "tftest" {
name = "myazurevm"
location = "eastus"
resource_group_name = "myresource-rg"
network_interface_ids = [azurerm_network_interface.azvm1nic.id]
size = "Standard_B1s"
storage_image_reference {
id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Compute/images/mytemplate"
}
storage_os_disk {
name = "my-os-disk"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_data_disk {
name = "my-data-disk"
managed_disk_type = "Premium_LRS"
disk_size_gb = 75
create_option = "FromImage"
lun = 0
}
os_profile {
computer_name = "myvmazure"
admin_username = "admin"
admin_password = "test123"
}
os_profile_windows_config {
provision_vm_agent = true
}
}
Here is the error returned during the plan phase:
2020-07-17T20:02:26.9367986Z ==============================================================================
2020-07-17T20:02:26.9368212Z Task : Terraform
2020-07-17T20:02:26.9368456Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2020-07-17T20:02:26.9368678Z Version : 0.0.142
2020-07-17T20:02:26.9368852Z Author : Microsoft Corporation
2020-07-17T20:02:26.9369049Z Help : [Learn more about this task](https://aka.ms/AA5j5pf)
2020-07-17T20:02:26.9369262Z ==============================================================================
2020-07-17T20:02:27.2826725Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe providers
2020-07-17T20:02:27.5303002Z .
2020-07-17T20:02:27.5304176Z └── provider.azurerm
2020-07-17T20:02:27.5304628Z
2020-07-17T20:02:27.5363313Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe plan
2020-07-17T20:02:29.7788471Z Error: Insufficient os_disk blocks
2020-07-17T20:02:29.7793007Z
2020-07-17T20:02:29.7793199Z   on line 0:
2020-07-17T20:02:29.7793305Z   (source code not available)
2020-07-17T20:02:29.7793472Z
2020-07-17T20:02:29.7793660Z At least 1 "os_disk" blocks are required.
2020-07-17T20:02:29.7793975Z Error: Missing required argument
Do you have any suggestions to locate the issue?
I have finally figured out the issue. I was using the wrong terraform resource:
wrong --> azurerm_windows_virtual_machine
correct --> azurerm_virtual_machine
azurerm_windows_virtual_machine doesn't support arguments like storage_os_disk and storage_data_disk, and it is not the right one for custom images unless the image is published in a Shared Image Gallery.
See the documentation for the options supported by each resource:
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html
https://www.terraform.io/docs/providers/azurerm/r/windows_virtual_machine.html
First do this:
https://learn.microsoft.com/pt-br/azure/virtual-machines/windows/upload-generalized-managed?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
Then here is my full code:
resource "azurerm_resource_group" "example" {
name = "example-resources1"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network1"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal1"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "example-nic1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "internal1"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "example" {
name = "example-machine1"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
vm_size = "Standard_B1s"
network_interface_ids = [
azurerm_network_interface.example.id,
]
storage_image_reference {
id = "/subscriptions/XXXXXXXXXXXXX/resourceGroups/ORIGEM/providers/Microsoft.Compute/images/myImage"
// just copy the id of the image that you created
}
storage_os_disk {
name = "my-os-disk"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
os_profile {
computer_name = "myvmazure"
admin_username = "adminusername"
admin_password = "testenovo#123"
}
os_profile_windows_config {
provision_vm_agent = true
}
}
// below, the code to call PowerShell via a VM extension
resource "azurerm_virtual_machine_extension" "software" {
name = "install-software"
//resource_group_name = azurerm_resource_group.example.name
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"
protected_settings = <<SETTINGS
{
"commandToExecute": "powershell -encodedCommand ${textencodebase64(file("install.ps1"), "UTF-16LE")}"
}
SETTINGS
}
You can use a custom image with the "azurerm_windows_virtual_machine" resource by setting the "source_image_id" parameter. The documentation notes that "One of either source_image_id or source_image_reference must be set." One can be used for marketplace/gallery images and the other for managed images.
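For example, a minimal sketch of that approach (admin credentials and OS disk settings here are illustrative placeholders; the image id and NIC are the ones from the question):

resource "azurerm_windows_virtual_machine" "tftest" {
  name                  = "myazurevm"
  location              = "eastus"
  resource_group_name   = "myresource-rg"
  network_interface_ids = [azurerm_network_interface.azvm1nic.id]
  size                  = "Standard_B1s"
  admin_username        = "azureadmin"   # illustrative
  admin_password        = "ChangeMe123!" # illustrative

  # managed image id; for marketplace/gallery images use source_image_reference instead
  source_image_id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Compute/images/mytemplate"

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
}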
