How to add an additional loop over a string list within a for_each loop

I've set up multiple GitHub repos within my .tf configuration with a simple for_each on the github_repository resource. In addition, I tried to create several branches (one branch per environment) for each newly created repo.
My first approach was to use a module (./modules/github_repo/repo.tf) which includes:
locals {
  environments = var.environments
}

resource "github_branch" "branches" {
  for_each      = toset(setsubtract(local.environments, ["prod"]))
  repository    = "orgName/${var.REPO_NAME}"
  branch        = lookup(var.environment_to_branch_map, each.key, each.key)
  source_branch = "master"
}
with the following variables:
variable "REPO_NAME" {
type = string
}
variable "environments" {
type = list(string)
}
variable "environment_to_branch_map" {
type = map(any)
default = {
"prod" = "master"
"dev" = "develop"
}
I am calling it like this from main.tf:
provider "github" {
token = var.GITHUB_TOKEN
owner = "orgName"
}
locals {
environments = ["dev", "prod", "staging", "test"]
microServices = tomap({ "service1" : "fnApp", "service2" : "fnApp" })
default_branch = "master"
}
module "branches_per_microservice" {
for_each = local.microServices
source = "./modules/github_repo"
REPO_NAME = each.key
environments = local.environments
depends_on = [github_repository.microservices]
}
Unfortunately, I get a 404 for each branch and repo combination, like this:
Error: Error querying GitHub branch reference /orgName/service1 (refs/heads/master): GET
https://api.github.com/repos//orgName/service1/git/ref/heads/master: 404 Not Found []
with
module.branches_per_microservice["service1"].github_branch.branches["test"]
on modules/github_repo/repo.tf line 23, in resource "github_branch" "branches":
I guess it's a "provider" thing, because if I try to create a branch directly in main.tf, it works. But the problem is that I can only use one loop within a resource. (I already know that provider blocks are not possible in modules that use count or for_each, as described in the Terraform docs.)
resource "github_branch" "branches" {
for_each = toset(setsubtract(local.environments, ["prod"]))
repository = github_repository.microservices["service1"].name
branch = lookup(var.environment_to_branch_map, each.key, each.key)
source_branch = "master"
}
In this case, I would have to create a resource for each microservice manually, which I really want to avoid. Are there any ideas how I could "nest" the second loop over the environments to create my branches for each microservice repo?
Many thanks in advance for any hint, idea or approach here.

A nested loop can be replaced with a single loop over the setproduct of two sets. The documentation for setproduct can be found here:
https://www.terraform.io/language/functions/setproduct
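A minimal sketch of that approach, assuming the branches are created directly in main.tf (where local.microServices and local.environments are defined) and that the environment_to_branch_map is also available there; all names are illustrative:

locals {
  # one entry per repo/environment pair, skipping prod
  repo_env_pairs = {
    for pair in setproduct(keys(local.microServices), setsubtract(local.environments, ["prod"])) :
    "${pair[0]}-${pair[1]}" => {
      repo = pair[0]
      env  = pair[1]
    }
  }
}

resource "github_branch" "branches" {
  for_each      = local.repo_env_pairs
  repository    = each.value.repo
  branch        = lookup(var.environment_to_branch_map, each.value.env, each.value.env)
  source_branch = "master"
}

Because the provider already sets owner = "orgName", the repository attribute only needs the repository name, which should also avoid the double slash visible in the 404 error above.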

Related

How to get newly created instance id using Terraform

I am creating AWS EC2 instances using an Auto Scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity I have removed some code:
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working. But I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like neither aws_launch_template nor aws_autoscaling_group exports the instance IDs. What are my options here?
Terraform is probably completing, and exiting, before the auto-scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing them; the auto-scaling group is. You would need to use another tool, like the AWS CLI, to get the instance IDs.
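For example, once the group has scaled up, an AWS CLI call along these lines could list the instance IDs (the group name here is illustrative):

aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-service-asg \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' \
  --output text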

Iterating network interfaces in vsphere provider with Terraform

Question: How can I iterate through a nested map to assign string values for a data resource block?
Context:
Working on a requirement to deploy multiple VMs via OVA template using the vsphere provider 2.0 on terraform.
As the network interfaces will vary according to the environment, the OVA template will only include the "global" network interface common to all VMs in any environment.
I am using the vsphere_network data resource to retrieve the distributed virtual port group ID for each network interface being assigned to the VMs.
I am currently stuck on the variable interpolation needed to iterate through this info and assign it to each VM resource in Terraform.
The goal: one vsphere_network data block to iterate over all DVPG IDs, and one VM resource block to deploy all VMs with those DVPGs using a dynamic network_interface block.
VM Configuration Variable:
variable "vmconfig" {
description = "Map of VM name => Configs "
type = map(object({
name = string
cpus = number
memory = number
folder = string
remote_ovf = string
netint = map(string)
}))
default = {}
}
.tfvars:
vmconfig = {
  "vm1" = {
    name       = "vm1"
    cpus       = 4
    memory     = 16384
    folder     = "foo/bary"
    remote_ovf = "foo.bar.ova"
    netint = {
      nic1 = "segment1",
      nic2 = "segment2",
      nic3 = "segment3",
      nic4 = "segment4"
    }
  },
  "vm2" = {...}, etc.
Calling the variable above into a local var:
locals {
  vm_values = {
    for name, config in var.vmconfig : name => {
      vm_name        = config.name
      num_cpus       = config.cpus
      memory         = config.memory
      folder         = config.folder
      remote_ovf_url = config.remote_ovf
      netint         = config.netint
    }
  }
}
Trying to iterate through each value of netint inside the data block using for_each instead of count (listed as a best practice for the VM type being deployed):
data "vsphere_network" "nicint" {
for_each = local.vm_values
name = each.value.netint
datacenter_id = data.vsphere_datacenter.dc.id
}
This data resource block is then called inside the VM resource block using dynamic:
resource "vsphere_virtual_machine" "vm" {
.
.
.
dynamic "network_interface" {
for_each = data.vsphere_network.nicint
content {
network_id = network_interface.value.id
}
}
}
The issue I'm having is iterating through each value inside netint. I have the inkling that I might be missing something trivial here; I would appreciate your support in defining that for_each iteration accurately, so that multiple vsphere_network data sources are available programmatically from that one data block.
I have tried the following variations for iterating in the data block:
data "vsphere_network" "nicint" {
for_each = {for k,v in local.vm_values : k => v.netint}
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
The error I get is:
Inappropriate value for attribute "name": string required
each.value is a map of string with 4 elements
I tried using merge, and it works! BUT it ended up creating duplicates for each VM, and it wouldn't modify an existing resource but would instead destroy and create another.
Another local variable created to map the network interface segments:
netint_map = merge([
  for vmtype, values in var.vmconfig :
  {
    for netint in values.netint :
    "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
  }
]...)
data "vsphere_network" "nicint" {
for_each = local.netint_map
name = each.value
datacenter_id = data.vsphere_datacenter.dc.id
}
Dear Hivemind, please guide me to optimize this effectively - thank you!!
Your merge is correct, so I just post it here for reference:
locals {
  netint_map = merge([
    for vmtype, values in var.vmconfig :
    {
      for netint in values.netint :
      "${vmtype}-${netint}" => { vmtype = vmtype, netint = netint }
    }
  ]...)
}

data "vsphere_network" "nicint" {
  for_each      = local.netint_map
  name          = each.value
  datacenter_id = data.vsphere_datacenter.dc.id
}
I think the issue is with your dynamic block. Namely, instead of for_each = data.vsphere_network.nicint you should iterate over netint from your variable, not over the data source.
resource "vsphere_virtual_machine" "vm" {
for_each = var.vmconfig
#...
dynamic "network_interface" {
for_each = toset(each.value.netint)
content {
network_id = data.vsphere_network.nicint["${each.key}-${network_interface.key}"].id
}
}
}

UPDATED - Terraform OCI - create multiple VCN in different regions

I would like to create 2 VCNs and other resources in two or more regions.
I uploaded my code to this GitHub account.
When I execute the code (you have to set the tenancy, user, fingerprint, etc.) I don't get errors, but:
When I go to the root region, everything is created (compartment and VCN).
When I go to the second region, the VCN is not created.
terraform version: v1.0.2
My VCN module has:
terraform {
  required_providers {
    oci = {
      source  = "hashicorp/oci"
      version = ">= 1.0.2"
      configuration_aliases = [
        oci.root,
        oci.region1
      ]
    }
  }
}
And when I call the VCN module I pass:
module "vcn" {
source = "./modules/vcn"
providers = {
oci.root = oci.home
oci.region1 = oci.region1
}
...
...
And my providers are:
provider "oci" {
alias = "home"
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
region = local.json_data.TERRAFORM_work.region
}
provider "oci" {
alias = "region1"
region = var.region1
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
}
What should I change to create this VCN in two or more regions at the same time, using terraform plan and apply?
Thanks so much.
Your module module.vcn does not declare any provider. From the docs:
"each module must declare its own provider requirements"
So you have to add something like this to your module:
terraform {
  required_providers {
    oci = {
      source  = "source_for-oci"
      version = ">= your_version"
    }
  }
}
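For reference, a module that receives two aliased providers might look roughly like this inside ./modules/vcn; the resource, variable, and CIDR values here are illustrative, and each resource selects its alias via the provider argument:

terraform {
  required_providers {
    oci = {
      source                = "hashicorp/oci"
      configuration_aliases = [oci.root, oci.region1]
    }
  }
}

# hypothetical VCN created in the secondary region via the aliased provider
resource "oci_core_vcn" "vcn_region1" {
  provider       = oci.region1
  compartment_id = var.compartment_id
  cidr_blocks    = ["10.1.0.0/16"]
  display_name   = "vcn-region1"
}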

Is there a way, inside a terraform script, to retrieve the latest version of a layer?

I have Lambdas that reference a layer. This layer is maintained by someone else, and when a new version is released I have to update my Terraform code to put the latest version in the ARN (here 19).
Is there a way, in the Terraform script, to get the latest version and use it?
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
"arn:aws:lambda:eu-central-1:587522145896:layer:my-layer-name:19"
]
}
Thanks.
PS: This means the layer's Terraform script is not in mine; it's another script that I don't have access to.
You can use the aws_lambda_layer_version data source to discover the latest version.
For example:
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
data.aws_lambda_layer_version.layer_version.arn
]
}
data "aws_lambda_layer_version" "layer_version" {
layer_name = "my-layer-name"
}
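Because the data source omits the version argument, it resolves to the latest available version of the layer, so the module always picks up the newest ARN on the next apply.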

terraform apply InvalidParameterException: The following supplied instance types do not exist: [m4.large]

I have the below cluster.tf file in my EC2 instance (type: t3.micro):
locals {
  cluster_name = "my-eks-cluster"
}

module "vpc" {
  source     = "git::https://git@github.com/reactiveops/terraform-vpc.git?ref=v5.0.1"
  aws_region = "eu-north-1"
  az_count   = 3
  aws_azs    = "eu-north-1a, eu-north-1b, eu-north-1c"
  global_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}

module "eks" {
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.aws_vpc_id
  subnets         = module.vpc.aws_subnet_private_prod_ids
  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
      instance_type    = "t3.micro"
    }
  }
  manage_aws_auth = false
}
But when I run the command terraform apply I get this exception:
Error: error creating EKS Node Group (my-eks-cluster/my-eks-cluster-eks_nodes-divine-pig): InvalidParameterException: The following supplied instance types do not exist: [m4.large]
I tried to google it but couldn't find a solution for it...
I haven't worked with the AWS modules before, but from modules/node_groups in that GitHub repo it looks like you may need to set node_group_defaults.
The reason is that the "If unset" column for the instance type row says that the value in [var.workers_group_defaults[instance_type]] will be used.
That default value is located in the root local.tf and has a value of m4.large, so maybe that instance type is not supported in your AWS region?
Not sure how to fix this completely, but this may help you start troubleshooting.
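A hedged sketch of what overriding that default might look like in the module call from the question; whether this is the right knob depends on the module version, so treat it as a starting point rather than a confirmed fix:

module "eks" {
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.aws_vpc_id
  subnets         = module.vpc.aws_subnet_private_prod_ids

  # assumption: override the module-level default so node groups do not
  # fall back to the m4.large value from the module's own locals
  workers_group_defaults = {
    instance_type = "t3.micro"
  }

  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
      instance_type    = "t3.micro"
    }
  }
  manage_aws_auth = false
}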
