Iterating network interfaces in vsphere provider with Terraform - for-loop

Question: How can I iterate through a nested map to assign string values for a data resource block?
Context:
Working on a requirement to deploy multiple VMs from an OVA template using the vSphere provider 2.0 on Terraform.
As the network interfaces vary by environment, the OVA template only includes the "global" network interface common to all VMs in any environment.
I am using the vsphere_network data source to retrieve the distributed virtual port group (DVPG) ID for each network interface being assigned to the VMs.
I am currently stuck on the variable interpolation needed to iterate through this info and assign it to each VM resource in Terraform.
The goal: one vsphere_network data block to look up all the DVPG IDs, and one VM resource block to deploy all VMs with those DVPGs using a dynamic network_interface block.
VM Configuration Variable:
variable "vmconfig" {
description = "Map of VM name => Configs "
type = map(object({
name = string
cpus = number
memory = number
folder = string
remote_ovf = string
netint = map(string)
}))
default = {}
}
.tfvars:
vmconfig = {
  "vm1" = {
    name       = "vm1"
    cpus       = 4
    memory     = 16384
    folder     = "foo/bary"
    remote_ovf = "foo.bar.ova"
    netint = {
      nic1 = "segment1",
      nic2 = "segment2",
      nic3 = "segment3",
      nic4 = "segment4"
    }
  },
  "vm2" = {...}, # etc.
}
Calling the variable above into a local var:
locals {
  vm_values = { for name, config in var.vmconfig : name => {
      vm_name        = config.name
      num_cpus       = config.cpus
      memory         = config.memory
      folder         = config.folder
      remote_ovf_url = config.remote_ovf
      netint         = config.netint
    }
  }
}
Trying to iterate through each value of netint inside the data source block using for_each instead of count (listed as a best practice for the VM type being deployed):
data "vsphere_network" "nicint" {
for_each = local.vm_values
name = each.value.netint
datacenter_id = data.vsphere_datacenter.dc.id
}
This data source is then referenced inside the VM resource block using a dynamic block:
resource "vsphere_virtual_machine" "vm" {
.
.
.
dynamic "network_interface" {
for_each = data.vsphere_network.nicint
content {
network_id = network_interface.value.id
}
}
}
The issue I'm having is iterating through each value inside netint. I get the inkling that I might be missing something trivial here; I would appreciate your support in defining that for_each iteration accurately, so that multiple vsphere_network data sources are available programmatically from that one data block.
I have tried the following variations for iterating in the data block:
data "vsphere_network" "nicint" {
for_each = {for k,v in local.vm_values : k => v.netint}
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
The error I get is:
Inappropriate value for attribute "name": string required
each.value is a map of string with 4 elements
I tried using merge, and it works! BUT it ended up creating duplicates for each VM, and instead of modifying an existing resource it would destroy and recreate it.
Another local variable created to map the network interface segments:
netint_map = merge([
  for vmtype, values in var.vmconfig :
  {
    for netint in values.netint :
    "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
  }
]...)
data "vsphere_network" "nicint" {
for_each = local.netint_map
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
Dear Hivemind, please guide me to optimize this effectively - thank you!!

Your merge is correct, so I'll just post it here for reference:
locals {
  netint_map = merge([
    for vmtype, values in var.vmconfig :
    {
      for netint in values.netint :
      "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
    }
  ]...)
}
data "vsphere_network" "nicint" {
for_each = local.netint_map
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
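For the sample .tfvars above, local.netint_map flattens to one entry per VM per segment (a sketch; note that a single-symbol for over a map iterates its values, so netint here is the segment name):
{
  "vm1-segment1" = { vmtype = "vm1", netint = "segment1" }
  "vm1-segment2" = { vmtype = "vm1", netint = "segment2" }
  "vm1-segment3" = { vmtype = "vm1", netint = "segment3" }
  "vm1-segment4" = { vmtype = "vm1", netint = "segment4" }
  # ...and likewise for vm2, etc.
}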
I think the issue is with your dynamic block. Namely, instead of for_each = data.vsphere_network.nicint you should iterate over netint from your variable, not over the data source.
resource "vsphere_virtual_machine" "vm" {
for_each = var.vmconfig
#...
dynamic "network_interface" {
for_each = toset(each.value.netint)
content {
network_id = data.vsphere_network.nicint["${each.key}-${network_interface.key}"].id
}
}
}
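With the VM resource keyed by for_each over var.vmconfig, each VM gets a stable address (e.g. vsphere_virtual_machine.vm["vm1"]), and each interface simply looks up its "<vm>-<segment>" entry in the data sources, so adding or removing a VM or NIC no longer forces unrelated instances to be destroyed and recreated.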

Related

Getting terraform error: Error: "expected create_option to be one of [Attach Empty], got FromImage"

I am trying to create an Azure CycleCloud server using Terraform; my code is below.
Note: I have removed the actual disk ID, VM ID, public key, etc.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
caching = "None"
create_option = "FromImage"
lun = 0
managed_disk_id = "Disk_id"
virtual_machine_id = "vm_id"
depends_on = [
azurerm_linux_virtual_machine.res-0,
]
}
resource "azurerm_linux_virtual_machine" "res-0" {
admin_username = "cyclecloud"
location = "westus2"
name = "cc3"
network_interface_ids = network_interfaces_id"
resource_group_name = "myrg"
size = "Standard_DS1_v2"
admin_ssh_key {
public_key = "my_public_key"
username = "cyclecloud"
}
boot_diagnostics {
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
plan {
name = "cyclecloud-81"
product = "azure-cyclecloud"
publisher = "azurecyclecloud"
}
source_image_reference {
offer = "azure-cyclecloud"
publisher = "azurecyclecloud"
sku = "cyclecloud-81"
version = "latest"
}
}
While running terraform apply, I get the below error:
Error: expected create_option to be one of [Attach Empty], got FromImage
  with azurerm_virtual_machine_data_disk_attachment.res-0,
  on main.tf line 3, in resource "azurerm_virtual_machine_data_disk_attachment" "res-0":
   3: create_option = "FromImage"
Please assist.
Note: I am using the below provider:
terraform {
  backend "local" {}
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.31.0"
    }
  }
}
provider "azurerm" {
  features {}
}
The error clearly says that you are passing an unsupported value to the create_option attribute of the azurerm_virtual_machine_data_disk_attachment resource. The possible values are Empty or Attach.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
caching = "None"
create_option = "Empty" ## or ## "Attach" # <- Choose any of these values.
lun = 0
managed_disk_id = "Disk_id"
virtual_machine_id = "vm_id"
depends_on = [
azurerm_linux_virtual_machine.res-0,
]
}
Refer to the attribute description:
create_option - (Optional) The Create Option of the Data Disk, such as Empty or Attach. Defaults to Attach. Changing this forces a new resource to be created.
Official Documentation: https://registry.terraform.io/providers/hashicorp/azurerm/3.31.0/docs/resources/virtual_machine_data_disk_attachment#create_option
After this, network_interface_ids = ["network_interface_id"] might result in an error if no real network interface IDs are provided as the attribute values.
Refer to official hashicorp example : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine
Not sure, but if the goal is to attach an additional data disk to the machine, then an azurerm_managed_disk resource is also required, not just the attachment, and there you can use the create_option = "FromImage" attribute value.
create_option for the azurerm_managed_disk : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/managed_disk#create_option
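As an illustration, a minimal sketch of that disk-plus-attachment pairing (the resource names, disk size, and the "Empty" option here are placeholder assumptions, not from the question):
resource "azurerm_managed_disk" "data" {
  name                 = "cc3-data"
  location             = "westus2"
  resource_group_name  = "myrg"
  storage_account_type = "Premium_LRS"
  create_option        = "Empty" # the managed disk resource accepts the richer options, e.g. "FromImage"
  disk_size_gb         = 128
}
resource "azurerm_virtual_machine_data_disk_attachment" "data" {
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.res-0.id
  lun                = 0
  caching            = "None"
}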

How to get newly created instance id using Terraform

I am creating AWS EC2 instance(s) using an auto scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity, I have removed some code:
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working, but I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like neither aws_launch_template nor aws_autoscaling_group exports the instance IDs. What are my options here?
Terraform is probably completing, and exiting, before the auto scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing them; the auto scaling group is. You would need to use another tool, like the AWS CLI, to get the instance IDs.
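For example, with the AWS CLI (a sketch; substitute the rendered name of your ASG, assumed here to be "myservice-asg"):
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "myservice-asg" \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' \
  --output text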

Terraform create multiple ec2 instances in multiple subnets

I am trying to create multiple EC2 instances with access to multiple subnets.
I've found questions and answers on doing these things individually, but not combined.
First, I create a private and a public subnet, then I set up a local to store the IDs once they are created:
locals {
  subnets = [aws_subnet.public_subnet.id, aws_subnet.private_subnet.id]
}
Next, I can create a variable number of servers in the private_subnet using for_each and the below:
servers = ["s1", "s2"]
resource "aws_instance" "system" {
  for_each      = var.servers
  ami           = var.aws_ami
  instance_type = var.instance_type
  #subnet_id    = aws_subnet.private_subnet.id
  count         = 2
  subnet_id     = element(local.subnets, count.index)
}
What I want to have, is that the server can access both subnets (it doesn't exist as far as I can tell, but the equivalent of subnet_ids = [aws_subnet.public_subnet.id, aws_subnet.private_subnet.id]).
I found a nice answer which works for a specific instance by creating two NICs (Terraform one EC2 instance with two subnets); however, I need to do this var.servers times, so it's difficult to hardcode the var.servers * 2 NICs with my current aws_instance setup (and I trip up when combining for_each and count).
Can someone please point me in the right direction?
To create multiple servers (in your case 4 in total) in the private (two servers) and public (two servers) subnets, you can use count:
resource "aws_instance" "system" {
count = length(var.servers) * length(local.subnets)
ami = var.aws_ami
instance_type = var.instance_type
subnet_id = element(local.subnets, count.index)
}
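Note that element() wraps around the list (the index is effectively taken modulo the list length), so with two subnets the instances alternate: indexes 0 and 2 land in the public subnet, and indexes 1 and 3 in the private subnet.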
For those looking to have a similar setup there are a few steps (assuming subnets and route tables already exist):
Create a machine on a single subnet
Create an additional network interface
Attach the network interface to the 'other' subnet (for the existing machine)
Create a variable for machines to create:
domains = [
  "asd.com",
  "asd2.com"
]
Create the machines on a single subnet:
resource "aws_instance" "domain" {
for_each = var.domains
ami = var.aws_ami
subnet_id = aws_subnet.public_subnet.id
associate_public_ip_address = true
tags = {
Name = "Instance - ${each.key}"
}
}
Create the additional interfaces for the 'other' subnet:
resource "aws_network_interface" "nics" {
for_each = var.domains
subnet_id = aws_subnet.private_subnet.id
tags = {
Name = "NIC - ${each.key}"
}
}
Attach the network interfaces to the 'other' subnet (for the existing machine):
resource "aws_network_interface_attachment" "attach_nics" {
for_each = var.domains
instance_id = aws_instance.domain[each.key].id
network_interface_id = aws_network_interface.nics[each.key].id
device_index = 1 # public_subnet = 0
}
The 'trick' here (that I didn't know about) is that you can reference attributes of created resources by their resource names elsewhere in the configuration (which is what the aws_network_interface_attachment resource above does).

terraform.apply InvalidParameterException: The following supplied instance types do not exist: [m4.large]

I have the below cluster.tf file in my EC2 instance (type: t3.micro):
locals {
  cluster_name = "my-eks-cluster"
}
module "vpc" {
  source     = "git::https://git@github.com/reactiveops/terraform-vpc.git?ref=v5.0.1"
  aws_region = "eu-north-1"
  az_count   = 3
  aws_azs    = "eu-north-1a, eu-north-1b, eu-north-1c"
  global_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}
module "eks" {
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.aws_vpc_id
  subnets         = module.vpc.aws_subnet_private_prod_ids
  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
      instance_type    = "t3.micro"
    }
  }
  manage_aws_auth = false
}
But when I run terraform apply I get this exception:
Error: error creating EKS Node Group (my-eks-cluster/my-eks-cluster-eks_nodes-divine-pig): InvalidParameterException: The following supplied instance types do not exist: [m4.large]
I tried to google it but couldn't find a solution for it...
I haven't worked with AWS modules before, but in modules/node_groups on that GitHub repo it looks like you may need to set node_group_defaults.
The reason is that the "If unset" column for the instance type row says that the value in var.workers_group_defaults[instance_type] will be used.
That default value is located in the root local.tf and is m4.large, so maybe that instance type is not supported in your AWS region?
Not sure how to fix this completely, but this may help with starting to troubleshoot.
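A minimal sketch of overriding that default in the module call (assuming this module version exposes node_groups_defaults; check the module's variables.tf for the exact name):
module "eks" {
  # ...same arguments as above...
  node_groups_defaults = {
    instance_type = "t3.micro" # override the module-level m4.large default
  }
}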

Create random variable in Terraform and pass it to GCE startup script

I want to run a metadata_startup_script when using Terraform to create a GCE instance.
This script is supposed to create a user and assign this user a random password.
I know that I can create a random string in Terraform with something like:
resource "random_string" "pass" {
length = 20
}
And my startup.sh will at some point be like:
echo myuser:${PSSWD} | chpasswd
How can I chain the random_string resource generation with the appropriate script invocation through the metadata_startup_script parameter?
Here is the google_compute_instance resource definition:
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = "${file("./startup.sh")}"
}
where startup.sh includes the above line setting the password non-interactively.
If you want to pass a Terraform variable into a templated file then you need to use a template.
In Terraform <0.12 you'll want to use the template_file data source like this:
resource "random_string" "pass" {
length = 20
}
data "template_file" "init" {
template = "${file("./startup.sh")}"
vars = {
password = "${random_string.pass.result}"
}
}
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = "${data.template_file.startup_script.rendered}"
}
and change your startup.sh script to be:
echo myuser:${password} | chpasswd
Note that the template uses ${} for interpolation of variables that Terraform is passing into the script. If you need to use $ anywhere else in your script then you'll need to escape it by using $$ to get a literal $ in your rendered script.
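For example, a sketch of a template that mixes both forms (the marker file is made up for illustration):
# ${password} is substituted by Terraform; $${HOME} renders as a literal $HOME for the shell
echo myuser:${password} | chpasswd
echo "password configured" > "$${HOME}/.password-configured"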
In Terraform 0.12+ there is the new templatefile function which can be used instead of the template_file data source if you'd prefer:
resource "random_string" "pass" {
length = 20
}
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = templatefile("./startup.sh", {password = random_string.pass.result})
}
As an aside you should also notice the warning on random_string:
This resource does use a cryptographic random number generator.
Historically this resource's intended usage has been ambiguous as the original example used it in a password. For backwards compatibility it will continue to exist. For unique ids please use random_id, for sensitive random values please use random_password.
As such you should instead use the random_password resource:
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
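Wiring it into the templatefile example above is then a one-line change (a sketch; random_password exposes the generated value via its result attribute, which Terraform treats as sensitive):
metadata_startup_script = templatefile("./startup.sh", { password = random_password.password.result })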
