terraform apply InvalidParameterException: The following supplied instance types do not exist: [m4.large]

I have the below cluster.tf file in my EC2 instance (type: t3.micro):
locals {
  cluster_name = "my-eks-cluster"
}

module "vpc" {
  source     = "git::https://github.com/reactiveops/terraform-vpc.git?ref=v5.0.1"
  aws_region = "eu-north-1"
  az_count   = 3
  aws_azs    = "eu-north-1a, eu-north-1b, eu-north-1c"

  global_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}

module "eks" {
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.aws_vpc_id
  subnets         = module.vpc.aws_subnet_private_prod_ids

  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
      instance_type    = "t3.micro"
    }
  }

  manage_aws_auth = false
}
But when I run terraform apply I get this exception:
Error: error creating EKS Node Group (my-eks-cluster/my-eks-cluster-eks_nodes-divine-pig): InvalidParameterException: The following supplied instance types do not exist: [m4.large]
I tried to Google it but couldn't find a solution...

I haven't worked with AWS modules before, but in modules/node_groups on that GitHub repo it looks like you may need to set node_groups_defaults.
The reason is that the "If unset" column for the instance type row says the value in var.workers_group_defaults["instance_type"] will be used.
That default is defined in the root local.tf and has a value of m4.large, so maybe that instance type is not supported in your AWS region (eu-north-1)?
Not sure how to fix this completely, but it may help you start troubleshooting.
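A minimal sketch of that idea (untested; the exact attribute name varies between module versions, so check variables.tf for the ref you pin):
module "eks" {
  # ... same arguments as above ...

  # Override the module-level default (m4.large) with a type that exists
  # in eu-north-1. Some module versions read instance_type, newer ones
  # expect instance_types as a list, so verify against the pinned ref.
  node_groups_defaults = {
    instance_type = "t3.micro"
  }
}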

Related

Getting terraform error: Error: "expected create_option to be one of [Attach Empty], got FromImage"

I am trying to create an Azure CycleCloud server using Terraform; my code is below.
Note: the actual disk ID, VM ID, public key, etc. have been removed.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
caching = "None"
create_option = "FromImage"
lun = 0
managed_disk_id = "Disk_id"
virtual_machine_id = "vm_id"
depends_on = [
azurerm_linux_virtual_machine.res-0,
]
}
resource "azurerm_linux_virtual_machine" "res-0" {
admin_username = "cyclecloud"
location = "westus2"
name = "cc3"
network_interface_ids = network_interfaces_id"
resource_group_name = "myrg"
size = "Standard_DS1_v2"
admin_ssh_key {
public_key = "my_public_key"
username = "cyclecloud"
}
boot_diagnostics {
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
plan {
name = "cyclecloud-81"
product = "azure-cyclecloud"
publisher = "azurecyclecloud"
}
source_image_reference {
offer = "azure-cyclecloud"
publisher = "azurecyclecloud"
sku = "cyclecloud-81"
version = "latest"
}
}
While running terraform apply I get the below error:
Error: expected create_option to be one of [Attach Empty], got FromImage
with azurerm_virtual_machine_data_disk_attachment.res-0,
on main.tf line 3, in resource "azurerm_virtual_machine_data_disk_attachment" "res-0":
3: create_option = "FromImage"
Please assist
Note: I am using the below provider:
terraform {
  backend "local" {}

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.31.0"
    }
  }
}

provider "azurerm" {
  features {}
}
The error clearly says that you are passing an unsupported value to the create_option attribute of the azurerm_virtual_machine_data_disk_attachment resource. The possible values are Empty or Attach.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
caching = "None"
create_option = "Empty" ## or ## "Attach" # <- Choose any of these values.
lun = 0
managed_disk_id = "Disk_id"
virtual_machine_id = "vm_id"
depends_on = [
azurerm_linux_virtual_machine.res-0,
]
}
Refer to attribute description:
create_option - (Optional) The Create Option of the Data Disk, such as Empty or Attach. Defaults to Attach. Changing this forces a new resource to be created.
Official Documentation: https://registry.terraform.io/providers/hashicorp/azurerm/3.31.0/docs/resources/virtual_machine_data_disk_attachment#create_option
Beyond that, network_interface_ids is currently a placeholder; the resource will fail unless real network interface IDs are provided as the attribute value.
Refer to the official HashiCorp example: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine
Not sure, but if the goal is to attach an additional data disk to the machine, then an azurerm_managed_disk resource is also required, not just the attachment, and there you can use the create_option = "FromImage" attribute value.
create_option for azurerm_managed_disk: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/managed_disk#create_option
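A minimal sketch of that combination, replacing the attachment above (untested; the disk name, size, and LUN are hypothetical placeholders):
# Standalone data disk; create_option values like "Empty" or "FromImage"
# belong on the disk itself, per the azurerm docs.
resource "azurerm_managed_disk" "data" {
  name                 = "cc3-data" # placeholder name
  location             = "westus2"
  resource_group_name  = "myrg"
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = 128 # placeholder size
}

# The attachment only supports "Empty" or "Attach".
resource "azurerm_virtual_machine_data_disk_attachment" "data" {
  caching            = "None"
  create_option      = "Attach"
  lun                = 0
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.res-0.id
}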

How to get newly created instance id using Terraform

I am creating AWS EC2 instance(s) using an auto scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity, I have removed some code:
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working. But I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like aws_launch_template and aws_autoscaling_group do not export the instance IDs. What are my options here?
Terraform is probably completing, and exiting, before the auto-scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing them; the auto-scaling group is. You would need to use another tool, like the AWS CLI, to get the instance IDs.
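For example, a sketch of that with the AWS CLI (substitute your ASG name, which in the code above would be "${var.name_prefix}-asg"); it lists the IDs of the instances the group is currently running:
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "my-service-asg" \
  --query "AutoScalingGroups[0].Instances[].InstanceId" \
  --output text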

Iterating network interfaces in vsphere provider with Terraform

Question: How can I iterate through a nested map to assign string values for a data resource block?
Context:
Working on a requirement to deploy multiple VMs via an OVA template, using the vSphere provider 2.0 in Terraform.
As the network interfaces will vary according to the environment, the OVA template only includes the "global" network interface common to all VMs in any environment.
I am using the vsphere_network data source to retrieve the distributed virtual port group (DVPG) ID for each network interface being assigned to the VMs.
I am currently stuck on a variable interpolation that iterates through this info to assign it to each VM resource in Terraform.
The goal: one vsphere_network data block that iterates all DVPG IDs, and one VM resource block that deploys all VMs with those DVPGs using a dynamic network_interface block.
VM Configuration Variable:
variable "vmconfig" {
description = "Map of VM name => Configs "
type = map(object({
name = string
cpus = number
memory = number
folder = string
remote_ovf = string
netint = map(string)
}))
default = {}
}
.tfvars:
vmconfig = {
  "vm1" = {
    name       = "vm1"
    cpus       = 4
    memory     = 16384
    folder     = "foo/bary"
    remote_ovf = "foo.bar.ova"
    netint = {
      nic1 = "segment1",
      nic2 = "segment2",
      nic3 = "segment3",
      nic4 = "segment4"
    }
  },
  "vm2" = {...}, # etc.
}
Calling the variable above into a local var:
locals {
  vm_values = { for name, config in var.vmconfig : name => {
    vm_name        = config.name
    num_cpus       = config.cpus
    memory         = config.memory
    folder         = config.folder
    remote_ovf_url = config.remote_ovf
    netint         = config.netint
  } }
}
Trying to iterate through each value of netint inside the data source block using for_each instead of count (listed as best practice for the VM type being deployed):
data "vsphere_network" "nicint" {
for_each = local.vm_values
name = each.value.netint
datacenter_id = data.vsphere_datacenter.dc.id
}
This data source is then referenced inside the VM resource block using a dynamic block:
resource "vsphere_virtual_machine" "vm" {
.
.
.
dynamic "network_interface" {
for_each = data.vsphere_network.nicint
content {
network_id = network_interface.value.id
}
}
}
The issue I'm having is iterating through each value inside netint. I have an inkling that I'm missing something trivial here; I would appreciate your support in defining that for_each iteration accurately, so that multiple vsphere_network data sources are available programmatically from that one data block.
I have tried the following variations for iterating in the data block:
data "vsphere_network" "nicint" {
for_each = {for k,v in local.vm_values : k => v.netint}
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
Error I get is:
Inappropriate value for attribute "name": string required
each.value is a map of string with 4 elements
I tried using merge, and it works! BUT it ended up creating duplicates for each VM, and it wouldn't modify an existing resource but destroy and recreate it instead.
Another local variable created to map the network interface segments:
netint_map = merge([
  for vmtype, values in var.vmconfig :
  {
    for netint in values.netint :
    "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
  }
]...)

data "vsphere_network" "nicint" {
  for_each      = local.netint_map
  name          = each.value
  datacenter_id = data.vsphere_datacenter.dc.id
}
Dear Hivemind, please guide me to optimize this effectively - thank you!!
Your merge is correct, so I'll just post it here for reference; note that name must be each.value.netint, since each value of netint_map is an object, not a string:
locals {
  netint_map = merge([
    for vmtype, values in var.vmconfig :
    {
      for netint in values.netint :
      "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
    }
  ]...)
}

data "vsphere_network" "nicint" {
  for_each      = local.netint_map
  name          = each.value.netint
  datacenter_id = data.vsphere_datacenter.dc.id
}
I think the issue is with your dynamic block. Namely, instead of for_each = data.vsphere_network.nicint you should iterate over netint from your variable, not the data source.
resource "vsphere_virtual_machine" "vm" {
for_each = var.vmconfig
#...
dynamic "network_interface" {
for_each = toset(each.value.netint)
content {
network_id = data.vsphere_network.nicint["${each.key}-${network_interface.key}"].id
}
}
}
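For anyone puzzling over why the composite keys line up (my reading of the code above): a single-variable for over a map yields its values, so netint_map is keyed by "<vm>-<segment>". In the dynamic block, values(each.value.netint) produces those same segment names, and since for_each there is a set, network_interface.key equals network_interface.value. That is what makes "${each.key}-${network_interface.key}" match the data source keys exactly.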

How can I pass AWS EC2 private IPs to a template file using Terraform

I am fairly new to Terraform. I am trying to create a number of EC2 instances and want to run a bootstrap script once the instances are ready, and in the bootstrap script I need to pass the private IPs of the created instances (as part of the Terraform script).
After much googling I came to know that I have to use template_file, but I am not able to use it correctly.
Terraform version: 0.11.13
/* Security group for the instances */
resource "aws_security_group" "test-reporting-sg" {
  name   = "${var.environment}-test-reporting-sg"
  vpc_id = "${var.vpc_id}"
}

data "template_file" "init" {
  count    = "${var.instance_count}"
  template = "${file("${path.module}/wrapperscript.sh")}"

  vars = {
    ip_addresses = "${aws_instance.dev-testreporting.*.private_ip}"
  }
}

resource "aws_instance" "dev-testreporting" {
  count           = "${var.instance_count}"
  ami             = "${var.ami_id}"
  instance_type   = "${var.instance_type}"
  key_name        = "${var.key_name}"
  security_groups = ["${aws_security_group.test-reporting-sg.id}"]
  subnet_id       = "${var.subnet_ids}"
  user_data       = "${data.template_file.init.*.rendered.[count.index]}"
}
Thanks
In the instance resource you can set private_ip like below:
private_ip = "${lookup(var.private_ips, count.index)}"
and add the private_ip values to your variables file.
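A minimal sketch of that approach in 0.11 syntax (untested; the variable name and addresses are hypothetical and must fall inside your subnet's CIDR). Pre-assigning the IPs means the template no longer has to reference the instances, which also avoids the dependency cycle between data.template_file.init and aws_instance.dev-testreporting:
variable "private_ips" {
  type = "map"
  default = {
    "0" = "10.0.1.10" # hypothetical addresses inside the subnet CIDR
    "1" = "10.0.1.11"
    "2" = "10.0.1.12"
  }
}

data "template_file" "init" {
  count    = "${var.instance_count}"
  template = "${file("${path.module}/wrapperscript.sh")}"

  vars = {
    # join the now-known IPs into a single string for the shell script
    ip_addresses = "${join(",", values(var.private_ips))}"
  }
}

resource "aws_instance" "dev-testreporting" {
  # ... other arguments as above ...
  count      = "${var.instance_count}"
  ami        = "${var.ami_id}"
  private_ip = "${lookup(var.private_ips, count.index)}"
  user_data  = "${element(data.template_file.init.*.rendered, count.index)}"
}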

Terraform: Cannot create spot instance. Error: MaxSpotInstanceCountExceeded

I am trying to create a spot instance in Terraform, and the Terraform code appears to be fine, but I keep getting an error back saying MaxSpotInstanceCountExceeded.
NOTE: Right now this is just a test hence I am not including security groups, IPs, etc etc.
Steps I have taken:
1. Checked that I have 0 spot instance requests created in the console.
2. Tried logging in to the console and creating a spot instance request. It works just fine.
3. Cancelled the spot instance request to ensure that I now have 0 spot instance requests.
4. Now I try to create virtually the same spot instance with the Terraform script below, but I get the error: MaxSpotInstanceCountExceeded
Does anyone know why Terraform (or maybe AWS?) is not allowing me to create the spot instance using the terraform script, but it works just fine from the console?
Thanks!
provider "aws" {
profile = "terraform_enterprise_user"
region = "us-east-2"
}
resource "aws_spot_instance_request" "MySpotInstance" {
# Spot Request Settings
wait_for_fulfillment = "true"
spot_type = "persistent"
instance_interruption_behaviour = "stop"
# Instance Settings
ami = "ami-0520e698dd500b1d1"
instance_type = "c4.large"
associate_public_ip_address = "1"
root_block_device {
volume_size = "10"
volume_type = "standard"
}
ebs_block_device {
device_name = "/dev/sdb"
volume_size = "50"
volume_type = "standard"
delete_on_termination = "true"
}
tags = {
Name = "MySpotInstance"
Application = "MyApp"
Environment = "TEST"
}
}
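As a side note on step 1, you can also confirm from the CLI that no spot requests linger in the account and region (a sketch; run it against the same profile the provider uses):
# List spot instance requests that are still open or active in us-east-2.
aws ec2 describe-spot-instance-requests \
  --region us-east-2 \
  --filters "Name=state,Values=open,active" \
  --query "SpotInstanceRequests[].SpotInstanceRequestId" \
  --output text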
