UPDATED - Terraform OCI - create multiple VCNs in different regions - Oracle

I would like to create two VCNs and other resources in two or more regions.
I uploaded my code to this GitHub account.
When I execute the code (you have to set the tenancy, user, fingerprint, etc.) I don't get any errors, but:
When I go to the root region, everything is created (compartment and VCN).
When I go to the second region, the VCN is not created.
Terraform version: v1.0.2
My VCN module has:
terraform {
  required_providers {
    oci = {
      source  = "hashicorp/oci"
      version = ">= 1.0.2"
      configuration_aliases = [
        oci.root,
        oci.region1
      ]
    }
  }
}
And when I call the VCN module I pass:
module "vcn" {
source = "./modules/vcn"
providers = {
oci.root = oci.home
oci.region1 = oci.region1
}
...
...
And my providers are:
provider "oci" {
alias = "home"
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
region = local.json_data.TERRAFORM_work.region
}
provider "oci" {
alias = "region1"
region = var.region1
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
}
What should I change to create these VCNs in two or more regions at the same time, using terraform plan and apply?
Thanks so much

Your module module.vcn does not declare any provider. From the docs:
each module must declare its own provider requirements
So you have to add something like this to your module:
terraform {
  required_providers {
    oci = {
      source  = "source_for-oci"
      version = ">= your_version"
    }
  }
}
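Once the aliases are declared, each resource inside the module also has to select which aliased provider it should use, otherwise everything is created with the default (root) provider only. A minimal sketch of what the module body could look like (resource names, variables, and CIDRs here are illustrative, not taken from the original code):

resource "oci_core_vcn" "vcn_root" {
  provider       = oci.root
  compartment_id = var.compartment_id
  cidr_blocks    = ["10.0.0.0/16"]
  display_name   = "vcn-root-region"
}

resource "oci_core_vcn" "vcn_region1" {
  provider       = oci.region1
  compartment_id = var.compartment_id
  cidr_blocks    = ["10.1.0.0/16"]
  display_name   = "vcn-region1"
}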

Related

How to get newly created instance id using Terraform

I am creating AWS EC2 instance(s) using an Auto Scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity, I have removed some code:
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working, but I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like aws_launch_template and aws_autoscaling_group do not export the instance IDs. What are my options here?
Terraform is probably completing, and exiting, before the Auto Scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing those instances; the Auto Scaling group is managing them. You would need to use another tool, like the AWS CLI, to get the instance IDs.
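That said, if the instances already exist by the time a later plan/apply runs, one possible workaround is the aws_instances data source, filtered on the tag that the Auto Scaling group adds to its instances. A hedged sketch; it only returns instances that are already running when Terraform reads the data source, so it won't wait for a scale-up:

data "aws_instances" "asg_instances" {
  # The ASG tags every instance it launches with aws:autoscaling:groupName.
  filter {
    name   = "tag:aws:autoscaling:groupName"
    values = [aws_autoscaling_group.service_autoscaling_group.name]
  }

  instance_state_names = ["running"]
}

output "asg_instance_ids" {
  value = data.aws_instances.asg_instances.ids
}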

Terraform plugin crashed when provisioning the Azure Local Network Gateway

I am trying to provision an Azure local network gateway. When I run terraform apply, I get the following error:
module.local_gateway.azurerm_local_network_gateway.local_gw: Creating...
╷
│ Error: Plugin did not respond
│
│ with module.local_gateway.azurerm_local_network_gateway.local_gw,
│ on modules/local-gateway/main.tf line 6, in resource "azurerm_local_network_gateway" "local_gw":
│ 6: resource "azurerm_local_network_gateway" "local_gw" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-azurerm_v3.0.0_x5 plugin:
panic: interface conversion: interface {} is nil, not string
goroutine 104 [running]:
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.expandLocalNetworkGatewayAddressSpaces(0x14001f87f00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:271 +0x234
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.resourceLocalNetworkGatewayCreateUpdate(0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:160 +0xa5c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:329 +0x170
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001a63ba0, 0x14001f87d80, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:467 +0x8d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x140004fa750, {0x108ae8b78, 0x14001cff880}, 0x14001d12dc0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go:977 +0xe38
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14000237880, {0x108ae8c20, 0x14002009e30}, 0x14001c1ee00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go:603 +0x338
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x10864d540, 0x14000237880}, {0x108ae8c20, 0x14002009e30}, 0x14001a51020, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x1400159c2a0, 0x10d0d0f40, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1292 +0xc04
google.golang.org/grpc.(*Server).handleStream(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1617 +0xa34
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x1400156d0e0, 0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:940 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:938 +0x1f0
Error: The terraform-provider-azurerm_v3.0.0_x5 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
And here's my local_gw.tf code:
resource "azurerm_local_network_gateway" "local_gw" {
name = var.azurerm_local_network_gateway_name
location = var.location
resource_group_name = var.rg_name
gateway_address = var.gateway_address
address_space = var.local_gw_address_space # The gateway IP address to connect with
tags = merge(var.common_tags)
}
This is where it is called as a module in main.tf:
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address                    = ""
  local_gw_address_space             = [""]
  common_tags = {
    "environment" = "test"
    "managedby"   = "devops"
    "developedby" = "jananath"
  }
  project           = "mysvg"
  resource_location = "Germany West Central"
}

# Local Gateway
module "local_gateway" {
  source = "./modules/local-gateway"

  location                           = local.resource_location
  rg_name                            = var.rg_name
  azurerm_local_network_gateway_name = var.azurerm_local_network_gateway_name
  gateway_address                    = var.gateway_address
  local_gw_address_space             = var.local_gw_address_space

  common_tags = merge(
    local.common_tags,
    {
      "Name" = "${local.project}-${var.azurerm_local_network_gateway_name}"
    },
  )
}
This is my provider.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "shared-resources"
    storage_account_name = "janasvtfstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}
Can someone help me fix this?
The multiple declarations of module "local_gateway" caused this problem. There is no need to declare the items again in the main TF file. As shown below, simply declaring the module suffices.
module "local_gateway" {
source = "./modules/local_gw/"
}
Variables are defined directly in the code in the updated snippet below.
Step 1:
The main tf code is as follows:
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address                    = ""
  local_gw_address_space             = [""]
  common_tags = {
    "environment" = "test"
    "managedby"   = "devops"
    "developedby" = "jananath"
  }
  project           = "mysvg"
  resource_location = "Germany West Central"
}

module "local_gateway" {
  source = "./modules/local_gw/"
}
The module code (local_gw -> local_gw.tf) is as follows:
resource "azurerm_local_network_gateway" "local_gw" {
name = "localgatewayswarnademo"
location = "Germany West Central"
resource_group_name = "rg-swarnademonew"
gateway_address = "12.13.14.15"
address_space = ["10.0.0.0/16"]
tags = merge("demo")
}
The provider tf file code is as follows:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }

  # backend "azurerm" {
  #   resource_group_name  = "shared-resources"
  #   storage_account_name = "janasvtfstate"
  #   container_name       = "tfstate"
  #   key                  = "terraform.tfstate"
  # }
}

provider "azurerm" {
  features {}
}
Note: the backend storage account configuration is commented out; if required, create it manually.
Step 2: run the commands below:
terraform plan
terraform apply -auto-approve
(The plan output, apply output, and portal verification were shown as screenshots.)

Enable OpenSearch through Terraform

I am trying to create an OpenSearch monitor and am using a Terraform provider for this. I am using Terraform 0.11.x and it throws an error on terraform apply.
The following provider constraints are not met by the currently-installed
provider plugins:
* elasticsearch (any version)
The Terraform script used:
terraform {
  required_providers {
    elasticsearch = {
      source  = "phillbaker/elasticsearch"
      version = "~> 1.6.3"
    }
  }
}

provider "elasticsearch" {
  url      = "https://search-events-pqrhr4w3u4dzervg41frow4mmy.us-east-1.es.amazonaws.com"
  insecure = true
}

resource "elasticsearch_index" "events" {
  name               = "events"
  number_of_shards   = 1
  number_of_replicas = 1
}

OCI: How to get OKE-prepared images in Terraform?

Hi all,
I want to automatically select the image for the nodes in the Kubernetes node pool when I select shape, operating system and version. For this, I have this data source:
data "oci_core_images" "images" {
#Required
compartment_id = var.cluster_compartment
#Optional
# display_name = var.image_display_name
operating_system = var.cluster_node_image_operating_system
operating_system_version = var.cluster_node_image_operating_system_version
shape = var.cluster_node_shape
state = "AVAILABLE"
# sort_by = var.image_sort_by
# sort_order = var.image_sort_order
}
and I select the image in oci_containerengine_node_pool as:
resource "oci_containerengine_node_pool" "node_pool01" {
# ...
node_shape = var.cluster_node_shape
node_shape_config {
memory_in_gbs = "16"
ocpus = "1"
}
node_source_details {
image_id = data.oci_core_images.images.images[0].id
source_type = "IMAGE"
}
}
But my problem seems to be that not all images are prepared for OKE (with the OKE software installed via cloud-init).
So the documentation suggests using the OCI CLI command:
oci ce node-pool-options get --node-pool-option-id all
And my question is: how can I do this with a data source in Terraform (i.e. recover only the OKE-ready images)?
You can use the oci_containerengine_node_pool_option data source:
data "oci_containerengine_node_pool_option" "test_node_pool_option" {
#Required
node_pool_option_id = oci_containerengine_node_pool_option.test_node_pool_option.id
#Optional
compartment_id = var.compartment_id
}
Ref doc: https://registry.terraform.io/providers/oracle/oci/latest/docs/data-sources/containerengine_node_pool_option
GitHub issue: https://github.com/oracle-terraform-modules/terraform-oci-oke/issues/263
Changelog release details: https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/CHANGELOG.adoc#310-april-6-2021
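To keep only the OKE-ready images, you can then filter the sources list that this data source exports (per the provider docs, each entry has a source_name and an image_id). A hedged sketch; the name filter and variable names below are illustrative:

data "oci_containerengine_node_pool_option" "oke" {
  # "all" matches the CLI example above; a cluster OCID also works here.
  node_pool_option_id = "all"
  compartment_id      = var.cluster_compartment
}

locals {
  # Keep only image sources whose name contains the desired OS version,
  # e.g. "Oracle-Linux-7.9-...-OKE-...". The filter expression is illustrative.
  oke_image_ids = [
    for source in data.oci_containerengine_node_pool_option.oke.sources :
    source.image_id
    if length(regexall(var.cluster_node_image_operating_system_version, source.source_name)) > 0
  ]
}

# Then, in oci_containerengine_node_pool.node_pool01:
# node_source_details {
#   image_id    = local.oke_image_ids[0]
#   source_type = "IMAGE"
# }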

Is there a way, inside a terraform script, to retrieve the latest version of a layer?

I have Lambdas that reference a layer. This layer is maintained by someone else, and when a new version is released I have to update my Terraform code to put the latest version in the ARN (here 19).
Is there a way, in the Terraform script, to get the latest version and use it?
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
"arn:aws:lambda:eu-central-1:587522145896:layer:my-layer-name:19"
]
}
Thanks.
PS: this means the layer's Terraform script is not in mine; it's another script that I don't have access to.
You can use the aws_lambda_layer_version data source to discover the latest version.
For example:
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
data.aws_lambda_layer_version.layer_version.arn
]
}
data "aws_lambda_layer_version" "layer_version" {
layer_name = "my-layer-name"
}
