Getting Terraform error: "expected create_option to be one of [Attach Empty], got FromImage" - azure-cyclecloud

I am trying to create an Azure CycleCloud server using Terraform; my code is below.
Note: I have removed the actual disk ID, VM ID, public key, etc.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
  caching = "None"
  create_option = "FromImage"
  lun = 0
  managed_disk_id = "Disk_id"
  virtual_machine_id = "vm_id"
  depends_on = [
    azurerm_linux_virtual_machine.res-0,
  ]
}
resource "azurerm_linux_virtual_machine" "res-0" {
  admin_username = "cyclecloud"
  location = "westus2"
  name = "cc3"
  network_interface_ids = network_interfaces_id"
  resource_group_name = "myrg"
  size = "Standard_DS1_v2"
  admin_ssh_key {
    public_key = "my_public_key"
    username = "cyclecloud"
  }
  boot_diagnostics {
  }
  os_disk {
    caching = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
  plan {
    name = "cyclecloud-81"
    product = "azure-cyclecloud"
    publisher = "azurecyclecloud"
  }
  source_image_reference {
    offer = "azure-cyclecloud"
    publisher = "azurecyclecloud"
    sku = "cyclecloud-81"
    version = "latest"
  }
}
While running terraform apply, I am getting the below error:
Error: expected create_option to be one of [Attach Empty], got FromImage
with azurerm_virtual_machine_data_disk_attachment.res-0,
on main.tf line 3, in resource "azurerm_virtual_machine_data_disk_attachment" "res-0":
3: create_option = "FromImage"
Please assist.
Note: I am using the below provider:
terraform {
  backend "local" {}
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "3.31.0"
    }
  }
}
provider "azurerm" {
  features {}
}

The error clearly says that you are passing an unsupported value to the create_option attribute of resource azurerm_virtual_machine_data_disk_attachment. The possible values are Empty or Attach.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
  caching = "None"
  create_option = "Empty" # or "Attach" (choose either of these values)
  lun = 0
  managed_disk_id = "Disk_id"
  virtual_machine_id = "vm_id"
  depends_on = [
    azurerm_linux_virtual_machine.res-0,
  ]
}
Refer to the attribute description:
create_option - (Optional) The Create Option of the Data Disk, such as Empty or Attach. Defaults to Attach. Changing this forces a new resource to be created.
Official Documentation: https://registry.terraform.io/providers/hashicorp/azurerm/3.31.0/docs/resources/virtual_machine_data_disk_attachment#create_option
After fixing this, network_interface_ids = network_interfaces_id" might also result in an error if no real network interface IDs are supplied as the attribute value.
Refer to official hashicorp example : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine
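For illustration, a minimal sketch of that attribute (assuming a hypothetical azurerm_network_interface resource named "example" is defined elsewhere in your configuration; it is a list of real NIC resource IDs):
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]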
Not sure, but if the goal is to attach an additional data disk to the machine, then an azurerm_managed_disk resource is also required, not just the attachment, and it is on azurerm_managed_disk that the create_option = "FromImage" attribute value can be used.
create_option for the azurerm_managed_disk : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/managed_disk#create_option
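As a rough sketch of that layout (the resource names and disk size below are illustrative placeholders, not taken from the question), an empty managed disk plus its attachment would look roughly like:
resource "azurerm_managed_disk" "data" {
  name = "cc3-data-disk"
  location = "westus2"
  resource_group_name = "myrg"
  storage_account_type = "Premium_LRS"
  create_option = "Empty"
  disk_size_gb = 128
}
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
  managed_disk_id = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.res-0.id
  lun = 0
  caching = "None"
}
If create_option = "FromImage" is really what you need, azurerm_managed_disk also expects an image reference (image_reference_id) alongside it, per the create_option docs linked above.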

Related

Terraform plugin crashed when provisioning the Azure Local Network Gateway

I am trying to provision an Azure local network gateway. When I run terraform apply I get the following error:
module.local_gateway.azurerm_local_network_gateway.local_gw: Creating...
╷
│ Error: Plugin did not respond
│
│ with module.local_gateway.azurerm_local_network_gateway.local_gw,
│ on modules/local-gateway/main.tf line 6, in resource "azurerm_local_network_gateway" "local_gw":
│ 6: resource "azurerm_local_network_gateway" "local_gw" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-azurerm_v3.0.0_x5 plugin:
panic: interface conversion: interface {} is nil, not string
goroutine 104 [running]:
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.expandLocalNetworkGatewayAddressSpaces(0x14001f87f00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:271 +0x234
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.resourceLocalNetworkGatewayCreateUpdate(0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:160 +0xa5c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:329 +0x170
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001a63ba0, 0x14001f87d80, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:467 +0x8d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x140004fa750, {0x108ae8b78, 0x14001cff880}, 0x14001d12dc0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go:977 +0xe38
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14000237880, {0x108ae8c20, 0x14002009e30}, 0x14001c1ee00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go:603 +0x338
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x10864d540, 0x14000237880}, {0x108ae8c20, 0x14002009e30}, 0x14001a51020, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x1400159c2a0, 0x10d0d0f40, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1292 +0xc04
google.golang.org/grpc.(*Server).handleStream(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1617 +0xa34
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x1400156d0e0, 0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:940 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:938 +0x1f0
Error: The terraform-provider-azurerm_v3.0.0_x5 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
And here's my local_gw.tf code:
resource "azurerm_local_network_gateway" "local_gw" {
  name = var.azurerm_local_network_gateway_name
  location = var.location
  resource_group_name = var.rg_name
  gateway_address = var.gateway_address
  address_space = var.local_gw_address_space # The gateway IP address to connect with
  tags = merge(var.common_tags)
}
This is where it is being called as a module in main.tf
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address = ""
  local_gw_address_space = [""]
  common_tags = {
    "environment" = "test"
    "managedby" = "devops"
    "developedby" = "jananath"
  }
  project = "mysvg"
  resource_location = "Germany West Central"
}
# Local Gateway
module "local_gateway" {
  source = "./modules/local-gateway"
  location = local.resource_location
  rg_name = var.rg_name
  azurerm_local_network_gateway_name = var.azurerm_local_network_gateway_name
  gateway_address = var.gateway_address
  local_gw_address_space = var.local_gw_address_space
  common_tags = merge(
    local.common_tags,
    {
      "Name" = "${local.project}-${var.azurerm_local_network_gateway_name}"
    },
  )
}
This is my provider.tf
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
  backend "azurerm" {
    resource_group_name = "shared-resources"
    storage_account_name = "janasvtfstate"
    container_name = "tfstate"
    key = "terraform.tfstate"
  }
}
# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}
Can someone help me fix this?
The multiple declarations of module "local_gateway" caused this problem. There is no need to declare the items again in the main TF file. As shown below, simply declaring the module suffices.
module "local_gateway" {
  source = "./modules/local_gw/"
}
Variables are defined directly in the code in the updated snippet below.
Step 1:
main.tf code as follows:
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address = ""
  local_gw_address_space = [""]
  common_tags = {
    "environment" = "test"
    "managedby" = "devops"
    "developedby" = "jananath"
  }
  project = "mysvg"
  resource_location = "Germany West Central"
}
module "local_gateway" {
  source = "./modules/local_gw/"
}
Module -> local_gw -> local_gw.tf code as follows:
resource "azurerm_local_network_gateway" "local_gw" {
  name = "localgatewayswarnademo"
  location = "Germany West Central"
  resource_group_name = "rg-swarnademonew"
  gateway_address = "12.13.14.15"
  address_space = ["10.0.0.0/16"]
  tags = merge({ "environment" = "demo" })
}
provider.tf file code as follows:
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
  # backend "azurerm" {
  #   resource_group_name = "shared-resources"
  #   storage_account_name = "janasvtfstate"
  #   container_name = "tfstate"
  #   key = "terraform.tfstate"
  # }
}
provider "azurerm" {
  features {}
}
Note: the azurerm backend storage account block is commented out; if required, create the storage account manually and re-enable it.
Step 2: run the commands below, then verify the result from the portal:
terraform plan
terraform apply -auto-approve

Iterating network interfaces in vsphere provider with Terraform

Question: How can I iterate through a nested map to assign string values for a data resource block?
Context:
I am working on a requirement to deploy multiple VMs from an OVA template using the vSphere provider 2.0 on Terraform.
As the network interfaces vary by environment, the OVA template only includes the "global" network interface common to all VMs in every environment.
I am using the vsphere_network data source to retrieve the distributed virtual port group (DVPG) ID for each network interface being assigned to the VMs.
I am currently stuck on a variable interpolation to iterate through this information and assign it to each VM resource in Terraform.
The goal: one vsphere_network data block to look up all the DVPG IDs, and one VM resource block to deploy all VMs with those DVPGs using a dynamic network_interface block.
VM Configuration Variable:
variable "vmconfig" {
  description = "Map of VM name => Configs"
  type = map(object({
    name = string
    cpus = number
    memory = number
    folder = string
    remote_ovf = string
    netint = map(string)
  }))
  default = {}
}
.tfvars:
vmconfig = {
  "vm1" = {
    name = "vm1"
    cpus = 4
    memory = 16384
    folder = "foo/bary"
    remote_ovf = "foo.bar.ova"
    netint = {
      nic1 = "segment1",
      nic2 = "segment2",
      nic3 = "segment3",
      nic4 = "segment4"
    }
  },
  "vm2" = { ... } # etc.
}
Calling the variable above into a local var:
locals {
  vm_values = { for name, config in var.vmconfig : name => {
    vm_name = config.name
    num_cpus = config.cpus
    memory = config.memory
    folder = config.folder
    remote_ovf_url = config.remote_ovf
    netint = config.netint
    }
  }
}
I am trying to iterate through each value of netint inside the data source block using for_each instead of count (listed as a best practice for the VM type being deployed):
data "vsphere_network" "nicint" {
  for_each = local.vm_values
  name = each.value.netint
  datacenter_id = data.vsphere_datacenter.dc.id
}
This data source is then referenced inside the VM resource block using a dynamic block:
resource "vsphere_virtual_machine" "vm" {
  # ...
  dynamic "network_interface" {
    for_each = data.vsphere_network.nicint
    content {
      network_id = network_interface.value.id
    }
  }
}
The issue I'm having is iterating through each value inside netint. I have an inkling that I might be missing something trivial here; I would appreciate your support in defining that for_each iteration accurately, so that multiple vsphere_network data sources are available programmatically from that one data block.
I have tried the following variations for iterating in the data block:
data "vsphere_network" "nicint" {
  for_each = { for k, v in local.vm_values : k => v.netint }
  name = each.value
  datacenter_id = data.vsphere_datacenter.dc.id
}
The error I get is:
Inappropriate value for attribute "name": string required
each.value is a map of string with 4 elements
I tried using merge, and it works! BUT it ended up creating duplicates for each VM and wouldn't modify an existing resource, only destroy it and create another.
Another local variable was created to map the network interface segments:
netint_map = merge([
  for vmtype, values in var.vmconfig :
  {
    for netint in values.netint :
    "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
  }
]...)
data "vsphere_network" "nicint" {
  for_each = local.netint_map
  name = each.value
  datacenter_id = data.vsphere_datacenter.dc.id
}
Dear Hivemind, please guide me to optimize this effectively - thank you!!
Your merge is correct, so I just post it here for reference:
locals {
  netint_map = merge([
    for vmtype, values in var.vmconfig :
    {
      for netint in values.netint :
      "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
    }
  ]...)
}
data "vsphere_network" "nicint" {
  for_each = local.netint_map
  name = each.value.netint # each.value is an object; name needs the DVPG name string
  datacenter_id = data.vsphere_datacenter.dc.id
}
I think the issue is with your dynamic block. Namely, instead of for_each = data.vsphere_network.nicint you should iterate over netint from your variable, not over the data source.
resource "vsphere_virtual_machine" "vm" {
  for_each = var.vmconfig
  #...
  dynamic "network_interface" {
    # iterate over the DVPG names from the variable so the keys line up with local.netint_map
    for_each = toset(values(each.value.netint))
    content {
      network_id = data.vsphere_network.nicint["${each.key}-${network_interface.key}"].id
    }
  }
}
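For clarity, with the vm1 entry from the .tfvars above, local.netint_map flattens to a map along these lines (a sketch of the evaluated value; the keys are what the data source instances are indexed by):
{
  "vm1-segment1" = { vmtype = "vm1", netint = "segment1" }
  "vm1-segment2" = { vmtype = "vm1", netint = "segment2" }
  "vm1-segment3" = { vmtype = "vm1", netint = "segment3" }
  "vm1-segment4" = { vmtype = "vm1", netint = "segment4" }
}
which is why the dynamic block can look up data.vsphere_network.nicint["vm1-segment1"] and so on.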

OCI: How to get OKE-prepared images in Terraform?

Hi all,
I want to automatically select the image for the nodes in a Kubernetes node pool when I select the Shape, Operating System, and Version. For this, I have this data source:
data "oci_core_images" "images" {
  # Required
  compartment_id = var.cluster_compartment
  # Optional
  # display_name = var.image_display_name
  operating_system = var.cluster_node_image_operating_system
  operating_system_version = var.cluster_node_image_operating_system_version
  shape = var.cluster_node_shape
  state = "AVAILABLE"
  # sort_by = var.image_sort_by
  # sort_order = var.image_sort_order
}
and I select the image in oci_containerengine_node_pool as:
resource "oci_containerengine_node_pool" "node_pool01" {
  # ...
  node_shape = var.cluster_node_shape
  node_shape_config {
    memory_in_gbs = "16"
    ocpus = "1"
  }
  node_source_details {
    image_id = data.oci_core_images.images.images[0].id
    source_type = "IMAGE"
  }
}
But my problem seems to be that not all images are prepared for OKE (with the OKE software installed via cloud-init).
So the documentation suggests using the OCI CLI command:
oci ce node-pool-options get --node-pool-option-id all
My question is: how can I do this with a data source in Terraform (i.e. retrieve only OKE-ready images)?
You can use the oci_containerengine_node_pool_option data source:
data "oci_containerengine_node_pool_option" "test_node_pool_option" {
  # Required: the OKE cluster OCID, or "all" (as in the CLI command above)
  node_pool_option_id = oci_containerengine_cluster.test_cluster.id # or "all"
  # Optional
  compartment_id = var.compartment_id
}
Ref doc : https://registry.terraform.io/providers/oracle/oci/latest/docs/data-sources/containerengine_node_pool_option
Github issue : https://github.com/oracle-terraform-modules/terraform-oci-oke/issues/263
Change log release details : https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/CHANGELOG.adoc#310-april-6-2021
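As a rough sketch of how this could replace the oci_core_images lookup (assuming, per the documentation linked above, that the data source exports a sources list with source_name and image_id attributes), you could filter the OKE-ready image sources and feed one into node_source_details:
locals {
  # keep only OKE image sources whose name mentions the desired OS version
  oke_image_ids = [
    for s in data.oci_containerengine_node_pool_option.test_node_pool_option.sources : s.image_id
    if length(regexall(var.cluster_node_image_operating_system_version, s.source_name)) > 0
  ]
}
resource "oci_containerengine_node_pool" "node_pool01" {
  # ...
  node_source_details {
    image_id = local.oke_image_ids[0]
    source_type = "IMAGE"
  }
}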

terraform apply InvalidParameterException: The following supplied instance types do not exist: [m4.large]

I have the below cluster.tf file in my EC2 instance (type: t3.micro):
locals {
  cluster_name = "my-eks-cluster"
}
module "vpc" {
  source = "git::https://git@github.com/reactiveops/terraform-vpc.git?ref=v5.0.1"
  aws_region = "eu-north-1"
  az_count = 3
  aws_azs = "eu-north-1a, eu-north-1b, eu-north-1c"
  global_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}
module "eks" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name = local.cluster_name
  cluster_version = "1.17"
  vpc_id = module.vpc.aws_vpc_id
  subnets = module.vpc.aws_subnet_private_prod_ids
  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity = 3
      min_capacity = 3
      instance_type = "t3.micro"
    }
  }
  manage_aws_auth = false
}
But when I run the command terraform apply I get this exception:
Error: error creating EKS Node Group (my-eks-cluster/my-eks-cluster-eks_nodes-divine-pig): InvalidParameterException: The following supplied instance types do not exist: [m4.large]
I tried to Google it but couldn't find a solution...
I haven't worked with the AWS modules before, but in modules/node_groups in that GitHub repo it looks like you may need to set node_groups_defaults.
The reason is that the "If unset" column for the instance type row says that the value in var.workers_group_defaults[instance_type] will be used.
That default is defined in the root local.tf and has a value of m4.large, so maybe that instance type is not available in your AWS region (eu-north-1)?
I am not sure this fixes it completely, but it may help you start troubleshooting.
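If that turns out to be the cause, a hedged sketch of the override (assuming the module version used here, v16.1.0, exposes a node_groups_defaults input as its documentation describes) would be:
module "eks" {
  # ... existing arguments from the question ...

  # default applied to every managed node group, instead of the module's m4.large fallback
  node_groups_defaults = {
    instance_type = "t3.micro"
  }
}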

Create random variable in Terraform and pass it to GCE startup script

I want to run a metadata_startup_script when using Terraform to create a GCE instance.
This script is supposed to create a user and assign this user a random password.
I know that I can create a random string in Terraform with something like:
resource "random_string" "pass" {
  length = 20
}
And my startup.sh will at some point be like:
echo myuser:${PSSWD} | chpasswd
How can I chain the random_string resource generation with the appropriate script invocation through the metadata_startup_script parameter?
Here is the google_compute_instance resource definition:
resource "google_compute_instance" "coreos-host" {
  name = "my-vm"
  machine_type = "n1-standard-2"
  zone = "us-central1-a"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
      size = 20
      type = "pd-standard"
    }
  }
  network_interface {
    network = "default"
    access_config {
      network_tier = "STANDARD"
    }
  }
  metadata_startup_script = "${file("./startup.sh")}"
}
where startup.sh includes the above line setting the password non-interactively.
If you want to pass a Terraform variable into a templated file then you need to use a template.
In Terraform <0.12 you'll want to use the template_file data source like this:
resource "random_string" "pass" {
  length = 20
}
data "template_file" "init" {
  template = "${file("./startup.sh")}"
  vars = {
    password = "${random_string.pass.result}"
  }
}
resource "google_compute_instance" "coreos-host" {
  name = "my-vm"
  machine_type = "n1-standard-2"
  zone = "us-central1-a"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
      size = 20
      type = "pd-standard"
    }
  }
  network_interface {
    network = "default"
    access_config {
      network_tier = "STANDARD"
    }
  }
  metadata_startup_script = "${data.template_file.init.rendered}"
}
and change your startup.sh script to be:
echo myuser:${password} | chpasswd
Note that the template uses ${} for interpolation of variables that Terraform is passing into the script. If you need to use $ anywhere else in your script then you'll need to escape it by using $$ to get a literal $ in your rendered script.
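For example (a hypothetical line, not from the question's script), a shell variable inside the template is written with the doubled dollar sign while the Terraform variable keeps the single one:
echo "running as $${USER}"; echo myuser:${password} | chpasswd
Terraform renders $${USER} as the literal ${USER} for the shell to expand, and substitutes ${password} itself.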
In Terraform 0.12+ there is the new templatefile function which can be used instead of the template_file data source if you'd prefer:
resource "random_string" "pass" {
  length = 20
}
resource "google_compute_instance" "coreos-host" {
  name = "my-vm"
  machine_type = "n1-standard-2"
  zone = "us-central1-a"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
      size = 20
      type = "pd-standard"
    }
  }
  network_interface {
    network = "default"
    access_config {
      network_tier = "STANDARD"
    }
  }
  metadata_startup_script = templatefile("./startup.sh", { password = random_string.pass.result })
}
As an aside, you should also notice the warning on random_string:
This resource does use a cryptographic random number generator.
Historically this resource's intended usage has been ambiguous as the original example used it in a password. For backwards compatibility it will continue to exist. For unique ids please use random_id, for sensitive random values please use random_password.
As such you should instead use the random_password resource:
resource "random_password" "password" {
  length = 16
  special = true
  override_special = "_%#"
}
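If you switch to random_password, the templatefile call above only needs to reference its result attribute instead (a sketch reusing the resource name from the snippet above):
metadata_startup_script = templatefile("./startup.sh", { password = random_password.password.result })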
