I am trying to provision an Azure local network gateway. When I run terraform apply I get the following error:
module.local_gateway.azurerm_local_network_gateway.local_gw: Creating...
╷
│ Error: Plugin did not respond
│
│ with module.local_gateway.azurerm_local_network_gateway.local_gw,
│ on modules/local-gateway/main.tf line 6, in resource "azurerm_local_network_gateway" "local_gw":
│ 6: resource "azurerm_local_network_gateway" "local_gw" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-azurerm_v3.0.0_x5 plugin:
panic: interface conversion: interface {} is nil, not string
goroutine 104 [running]:
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.expandLocalNetworkGatewayAddressSpaces(0x14001f87f00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:271 +0x234
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.resourceLocalNetworkGatewayCreateUpdate(0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:160 +0xa5c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:329 +0x170
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001a63ba0, 0x14001f87d80, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:467 +0x8d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x140004fa750, {0x108ae8b78, 0x14001cff880}, 0x14001d12dc0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go:977 +0xe38
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14000237880, {0x108ae8c20, 0x14002009e30}, 0x14001c1ee00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go:603 +0x338
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x10864d540, 0x14000237880}, {0x108ae8c20, 0x14002009e30}, 0x14001a51020, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x1400159c2a0, 0x10d0d0f40, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1292 +0xc04
google.golang.org/grpc.(*Server).handleStream(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1617 +0xa34
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x1400156d0e0, 0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:940 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:938 +0x1f0
Error: The terraform-provider-azurerm_v3.0.0_x5 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
And here's my local_gw.tf code:
resource "azurerm_local_network_gateway" "local_gw" {
name = var.azurerm_local_network_gateway_name
location = var.location
resource_group_name = var.rg_name
gateway_address = var.gateway_address
address_space = var.local_gw_address_space # The gateway IP address to connect with
tags = merge(var.common_tags)
}
This is where it is called as a module in main.tf:
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address                    = ""
  local_gw_address_space             = [""]
  common_tags = {
    "environment" = "test"
    "managedby"   = "devops"
    "developedby" = "jananath"
  }
  project           = "mysvg"
  resource_location = "Germany West Central"
}
# Local Gateway
module "local_gateway" {
  source                             = "./modules/local-gateway"
  location                           = local.resource_location
  rg_name                            = var.rg_name
  azurerm_local_network_gateway_name = var.azurerm_local_network_gateway_name
  gateway_address                    = var.gateway_address
  local_gw_address_space             = var.local_gw_address_space
  common_tags = merge(
    local.common_tags,
    {
      "Name" = "${local.project}-${var.azurerm_local_network_gateway_name}"
    },
  )
}
This is my provider.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "shared-resources"
    storage_account_name = "janasvtfstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}
Can someone help me fix this?
The multiple declarations of module "local_gateway" caused this problem. There is no need to declare the items again in the main TF file. As shown below, simply declaring the module suffices.
module "local_gateway" {
source = "./modules/local_gw/"
}
In the updated code snippet below, the values are defined directly in the module's code.
Step 1: The main.tf code is as follows:
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address                    = ""
  local_gw_address_space             = [""]
  common_tags = {
    "environment" = "test"
    "managedby"   = "devops"
    "developedby" = "jananath"
  }
  project           = "mysvg"
  resource_location = "Germany West Central"
}

module "local_gateway" {
  source = "./modules/local_gw/"
}
The module code in modules/local_gw/local_gw.tf is as follows:
resource "azurerm_local_network_gateway" "local_gw" {
name = "localgatewayswarnademo"
location = "Germany West Central"
resource_group_name = "rg-swarnademonew"
gateway_address = "12.13.14.15"
address_space = ["10.0.0.0/16"]
tags = merge("demo")
}
The provider.tf file code is as follows:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
  # backend "azurerm" {
  #   resource_group_name  = "shared-resources"
  #   storage_account_name = "janasvtfstate"
  #   container_name       = "tfstate"
  #   key                  = "terraform.tfstate"
  # }
}

provider "azurerm" {
  features {}
}
Note: the azurerm backend (storage account) is commented out. If remote state is required, create the storage account manually and re-enable it.
Step 2: Run the commands below:
terraform plan
terraform apply -auto-approve
(The plan output, apply output, and verification from the portal are omitted here.)
I am trying to create an Azure CycleCloud server using Terraform; my code is below.
Note: I have removed the actual disk ID, VM ID, public key, etc.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
caching = "None"
create_option = "FromImage"
lun = 0
managed_disk_id = "Disk_id"
virtual_machine_id = "vm_id"
depends_on = [
azurerm_linux_virtual_machine.res-0,
]
}
resource "azurerm_linux_virtual_machine" "res-0" {
admin_username = "cyclecloud"
location = "westus2"
name = "cc3"
network_interface_ids = network_interfaces_id"
resource_group_name = "myrg"
size = "Standard_DS1_v2"
admin_ssh_key {
public_key = "my_public_key"
username = "cyclecloud"
}
boot_diagnostics {
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
plan {
name = "cyclecloud-81"
product = "azure-cyclecloud"
publisher = "azurecyclecloud"
}
source_image_reference {
offer = "azure-cyclecloud"
publisher = "azurecyclecloud"
sku = "cyclecloud-81"
version = "latest"
}
}
While running terraform apply, I get the below error:
Error: expected create_option to be one of [Attach Empty], got FromImage
with azurerm_virtual_machine_data_disk_attachment.res-0,
on main.tf line 3, in resource "azurerm_virtual_machine_data_disk_attachment" "res-0":
3: create_option = "FromImage"
Please assist.
Note: I am using the below provider:
terraform {
  backend "local" {}
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.31.0"
    }
  }
}

provider "azurerm" {
  features {}
}
The error clearly says that you are passing an unsupported value to the create_option attribute of resource azurerm_virtual_machine_data_disk_attachment. The possible values are Empty or Attach.
resource "azurerm_virtual_machine_data_disk_attachment" "res-0" {
caching = "None"
create_option = "Empty" ## or ## "Attach" # <- Choose any of these values.
lun = 0
managed_disk_id = "Disk_id"
virtual_machine_id = "vm_id"
depends_on = [
azurerm_linux_virtual_machine.res-0,
]
}
Refer to attribute description:
create_option - (Optional) The Create Option of the Data Disk, such as Empty or Attach. Defaults to Attach. Changing this forces a new resource to be created.
Official Documentation: https://registry.terraform.io/providers/hashicorp/azurerm/3.31.0/docs/resources/virtual_machine_data_disk_attachment#create_option
After this, network_interface_ids = network_interfaces_id" might also result in an error if no real network interface IDs are provided as the attribute value.
Refer to official hashicorp example : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine
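For illustration only, a rough sketch of what that attribute expects — a list of NIC resource IDs from an azurerm_network_interface resource (the names and the subnet reference below are placeholders, not taken from the question):
resource "azurerm_network_interface" "example" {
  name                = "cc3-nic"
  location            = "westus2"
  resource_group_name = "myrg"

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.example.id # placeholder subnet reference
    private_ip_address_allocation = "Dynamic"
  }
}

# Then, inside azurerm_linux_virtual_machine.res-0:
#   network_interface_ids = [azurerm_network_interface.example.id]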
Not sure, but if the goal is to attach an additional data disk to the machine, then an azurerm_managed_disk resource is also required, not just the attachment, and it is on azurerm_managed_disk that the create_option = "FromImage" value can be used.
create_option for the azurerm_managed_disk : https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/managed_disk#create_option
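As a rough sketch of that pattern (the disk name, size, and location are illustrative placeholders, and the VM reference reuses the question's azurerm_linux_virtual_machine.res-0):
# Create a new empty data disk, then attach it to the existing VM.
resource "azurerm_managed_disk" "data" {
  name                 = "cc3-data-disk"
  location             = "westus2"
  resource_group_name  = "myrg"
  storage_account_type = "Premium_LRS"
  create_option        = "Empty" # azurerm_managed_disk also accepts "FromImage", which additionally needs an image reference
  disk_size_gb         = 128
}

resource "azurerm_virtual_machine_data_disk_attachment" "data" {
  managed_disk_id    = azurerm_managed_disk.data.id
  virtual_machine_id = azurerm_linux_virtual_machine.res-0.id
  lun                = 0
  caching            = "None"
  create_option      = "Attach"
}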
I want to provision an EC2 instance with a key and run a script inside the EC2 instance.
Filename: instance.tf
resource "aws_key_pair" "mykey" {
key_name = "terraform-nverginia"
public_key = "${file ("${var.PATH_TO_PUBLIC_KEY}")}"
}
resource "aws_instance" "demo" {
ami = "${lookup (var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykey.key_name}"
tags = {
Name = "T-instance"
}
provisioner "file" { // copying file from local to remote server
source = "deployscript.sh"
destination = "/home/ec2-user/deploy.sh" //check if both the file names are same or not.
}
provisioner "remote-exec" { // executing script to do some deployment in the server.
inline = [
"chmod +x /home/ec2-user/deploy.sh",
"sudo /home/ec2-user/deploy.sh"
]
}
connection {
type = "ssh" // To connect to the instance
user = "${var.INSTANCE_USERNAME}"
host = "122.171.19.4" // My personal laptop's ip address
private_key = "${file ("${var.PATH_TO_PRIVATE_KEY}")}"
}
} // end of resource aws_instance
//-------------------------------------------------
Filename: provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
Filename: vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION" {
  default = "us-east-1"
}
variable "AMIS" {
  type = map
  default = {
    us-east-1 = "ami-0574da719dca65348"
    us-east-2 = "ami-0a606d8395a538502"
  }
}
variable "PATH_TO_PRIVATE_KEY" {
  default = "terraform-nverginia"
}
variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}
variable "INSTANCE_USERNAME" {
  default = "ec2-user"
}
Filename: terraform.tfvars
AWS_ACCESS_KEY = "<Access key>"
AWS_SECRET_KEY = "<Secret key>"
Error:
PS D:\Rajiv\DevOps-Practice\Terraform\demo-2> terraform plan
╷
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root module and configure the
│ provider's required arguments as described in the provider documentation.
│ Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 594b6dab-e087-4678-8c57-63a65c3d3d41, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on <empty> line 0:
│ (source code not available)
I am expecting an EC2 instance to be created and the script to run.
Providers are plugins that help Terraform interact with specific cloud services. You must declare and install a cloud provider before you can use that cloud service via Terraform. Refer to this link: https://developer.hashicorp.com/terraform/language/providers. In your code, try adding the AWS provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.48.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
Then run terraform init to install the provider.
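If the provider should also pick up the region and credentials already declared in vars.tf and terraform.tfvars, a minimal sketch reusing the question's variable names could look like this:
provider "aws" {
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY
  secret_key = var.AWS_SECRET_KEY
}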
So I'm new to Terraform and trying to learn it, with some great difficulty. I run into a timeout issue when applying certain things to a Kubernetes cluster that is hosted locally.
Setup
Running on Windows 10
Running Docker for Windows with Kubernetes cluster enabled
Running WSL2 Ubuntu 20 on Windows
Installed Terraform and able to use kubectl against the cluster
Coding
I'm following the coding example set up from this website:
https://nickjanetakis.com/blog/configuring-a-kind-cluster-with-nginx-ingress-using-terraform-and-helm
But with a modification: instead of running demo.sh I'm running its steps manually, and the kubectl file it references I've turned into a Terraform file, as this is how I would do deployments in the future. I've also had to comment out the provisioner "local-exec" blocks, as the kubectl commands outright fail in Terraform.
Code file
resource "kubernetes_pod" "foo-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "foo-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "foo"
}
}
spec {
container {
name = "foo-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=foo"]
}
}
}
resource "kubernetes_pod" "bar-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "bar-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "bar"
}
}
spec {
container {
name = "bar-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=bar"]
}
}
}
resource "kubernetes_service" "foo-service" {
depends_on = [kubernetes_pod.foo-app]
metadata {
name = "foo-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "foo"
}
port {
port = 5678
}
}
}
resource "kubernetes_service" "bar-service" {
depends_on = [kubernetes_pod.bar-app]
metadata {
name = "bar-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "bar"
}
port {
port = 5678
}
}
}
resource "kubernetes_ingress" "example-ingress" {
depends_on = [kubernetes_service.foo-service, kubernetes_service.bar-service]
metadata {
name = "example-ingress"
namespace = var.ingress_nginx_namespace
}
spec {
rule {
host = "172.21.220.84"
http {
path {
path = "/foo"
backend {
service_name = "foo-service"
service_port = 5678
}
}
path {
path = "/var"
backend {
service_name = "bar-service"
service_port = 5678
}
}
}
}
}
}
The problem
I run into two problems. First, the pods cannot find the namespace, even though it has been built; kubectl shows it as a valid namespace as well.
But the main problem I have is timeouts. These happen on different elements altogether, depending on the example. When trying to deploy the pods to the local cluster I get a 5-minute timeout, with this as the output after 5 minutes:
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.foo-app,
│ on services.tf line 1, in resource "kubernetes_pod" "foo-app":
│ 1: resource "kubernetes_pod" "foo-app" {
│
╵
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.bar-app,
│ on services.tf line 18, in resource "kubernetes_pod" "bar-app":
│ 18: resource "kubernetes_pod" "bar-app" {
This happens for several kinds of things; I have this problem with pods, deployments, and ingresses. This is very frustrating, and I would like to know: is there a particular setting I need to change, or am I doing something wrong with my setup?
Thanks!
Edit #1:
So I repeated this on an Ubuntu VM with Minikube installed and got the same behavior. I copied the scripts, got Terraform installed, and confirmed Minikube is all up and running, yet I'm getting the same behavior there as well. I'm wondering if this is an issue with Kubernetes and Terraform?
I would like to create two VCNs and other resources in two or more regions.
I uploaded my code to this GitHub account.
When I execute the code (you have to set the tenancy, user, fingerprint, etc.) I don't get errors, but:
when I go to the root region, everything is created (compartment and VCN);
when I go to the second region, the VCN is not created.
Terraform version: v1.0.2
My VCN module has:
terraform {
  required_providers {
    oci = {
      source  = "hashicorp/oci"
      version = ">= 1.0.2"
      configuration_aliases = [
        oci.root,
        oci.region1
      ]
    }
  }
}
And when I call the VCN module I pass:
module "vcn" {
  source = "./modules/vcn"
  providers = {
    oci.root    = oci.home
    oci.region1 = oci.region1
  }
  ...
  ...
And my providers are:
provider "oci" {
alias = "home"
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
region = local.json_data.TERRAFORM_work.region
}
provider "oci" {
alias = "region1"
region = var.region1
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
}
What should I change to create this VCN in two or more regions at the same time, using terraform plan and apply?
Thanks so much.
Your module module.vcn does not declare its own provider requirements. From the docs:
"each module must declare its own provider requirements"
So you have to add something like this to your module:
terraform {
  required_providers {
    oci = {
      source  = "source_for-oci"
      version = ">= your_version"
    }
  }
}
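Since the root module passes two aliased providers (oci.root and oci.region1) to the module, the module's required_providers block also needs configuration_aliases, and each resource inside the module has to select an alias explicitly; otherwise everything is created with the default (home-region) configuration only. A sketch, with placeholder variables and display names:
terraform {
  required_providers {
    oci = {
      source                = "hashicorp/oci"
      version               = ">= 1.0.2"
      configuration_aliases = [oci.root, oci.region1]
    }
  }
}

# One VCN per region: each resource picks an aliased provider explicitly.
resource "oci_core_vcn" "root_vcn" {
  provider       = oci.root
  compartment_id = var.compartment_id # placeholder
  cidr_block     = var.vcn_cidr       # placeholder
  display_name   = "vcn-root-region"
}

resource "oci_core_vcn" "region1_vcn" {
  provider       = oci.region1
  compartment_id = var.compartment_id # placeholder
  cidr_block     = var.vcn_cidr       # placeholder
  display_name   = "vcn-region1"
}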
This time I have a problem that I don't know the best way to resolve:
I need to create a policy for a user group that I also have to create.
This is part of my code:
provider.tf
provider "oci" {
tenancy_ocid = local.json_data.TERRAFORM.tenancy_ocid
user_ocid = local.json_data.TERRAFORM.user_ocid
private_key_path = local.json_data.TERRAFORM.private_key_path
fingerprint = local.json_data.TERRAFORM.fingerprint
region = local.json_data.TERRAFORM.region
}
data "oci_identity_compartments" "compartment_id" {
#Required
compartment_id = local.json_data.COMPARTMENT.compartment_ocid
//compartment_id = local.json_data.TERRAFORM.tenancy_ocid
}
data "oci_identity_tenancy" "test_tenancy" {
#Required
tenancy_id = local.json_data.TERRAFORM.tenancy_ocid
}
data "oci_identity_region_subscriptions" "test_region_subscriptions" {
#Required
tenancy_id = local.json_data.TERRAFORM.tenancy_ocid
}
// password: $KV3PeNx&f5QJD0OBJK&
resource "oci_identity_user" "create_user_Traininguser1" {
#Required
//compartment_id = data.oci_identity_compartments.compartment_id.id
compartment_id = local.json_data.TERRAFORM.tenancy_ocid
description = local.json_data.USER_GROUP.user_description
name = local.json_data.USER_GROUP.user_name
}
resource "oci_identity_group" "create_group_Traininggroup" {
#Required
compartment_id = local.json_data.TERRAFORM.tenancy_ocid
description = local.json_data.USER_GROUP.group_description
name = local.json_data.USER_GROUP.group_name
}
resource "oci_identity_user_group_membership" "add_user_group_membership" {
#Required
group_id = oci_identity_group.create_group_Traininggroup.id
user_id = oci_identity_user.create_user_Traininguser1.id
}
resource "oci_identity_policy" "test_policy" {
#Required
compartment_id = local.json_data.TERRAFORM.tenancy_ocid
description = local.json_data.POLICY.policy_description
name = local.json_data.POLICY.policy_name
statements = local.json_data.POLICY.policy_statements
}
variable.tf
locals {
  json_data = jsondecode(file("${path.module}/init_values.json"))
}
init_values.json
{
  "TERRAFORM": {
    "tenancy_ocid": "ocid1.tenancy.ocxxxxxxxxxxxxx",
    "user_ocid": "ocid1.user.oc1.xxxxxxxxxxxxxxx",
    "private_key_path": "/Users/name/.oci/oci_api_key.pem",
    "fingerprint": "XX:X0:X2:5X:c0:32:XX:07:3f:7e:XX:af:XX:3f:31:93",
    "region": "eu-frankfurt-1",
    "new_compartment": "new_compartment"
  },
  "COMPARTMENT": {
    "compartment_ocid": "ocid1.compartment.oc1.Xxxxxxxxxxxxxxx"
  },
  "USER_GROUP": {
    "user_description": "usuario de prueba",
    "user_name": "Traininguser1",
    "group_description": "grupo de prueba",
    "group_name": "Traininggroup"
  },
  "POLICY": {
    "policy_name": "TrainingPolicy",
    "policy_description": "TrainingDescription",
    "policy_statements": ["Allow group Traininggroup to manage virtual-network-family in Tenancy", "Allow group Traininggroup to manage instance-family in Tenancy"]
  }
}
Error:
│ Error: 400-InvalidParameter
│ Provider version: 4.28.0, released on 2021-05-26. This provider is 8 update(s) behind to current.
│ Service: Identity Policy
│ Error Message: The group Traininggroup specified in the policy statement does not exist under current compartment hierarchy.
│
│ OPC request ID: 897be7b9cd1dfccdbf34826dca571765/69DB175ED2CA61834FB1EBE77EC362BA/8A9735EF7EACF883EDE87413C40FBD45
│ Suggestion: Please update the parameter(s) in the Terraform config as per error message The group Traininggroup specified in the policy statement does not exist under current compartment hierarchy.
│
│
│
│ with oci_identity_policy.test_policy,
│ on provider.tf line 70, in resource "oci_identity_policy" "test_policy":
│ 70: resource "oci_identity_policy" "test_policy" {
│
╵
I don't want to create separate scripts for this part, meaning:
first execute a script to create the user group,
second execute another script to create the policy, etc.
The same applies if, for example, I want to create a compartment that will contain users, user groups, policies, etc.
So what is the best way to do it all at once?
Can somebody help me?
Regards
Implicit dependencies are the primary way that Terraform understands the relationships between your resources. Sometimes there are dependencies between resources that are not visible to Terraform.
Use the depends_on property. More instructions here (learn.hashicorp.com/tutorials/terraform/dependencies).
It allows you to create the dependency between "oci_identity_policy" "test_policy" and "oci_identity_group" "create_group_Traininggroup".
Terraform is trying to create the policy before it creates the group.
You should add a depends_on property to the resource "test_policy" to define this dependency clearly, like this:
resource "oci_identity_policy" "test_policy" {
depends_on = [oci_identity_group.create_group_Traininggroup]
#Required
compartment_id = local.json_data.TERRAFORM.tenancy_ocid
description = local.json_data.POLICY.policy_description
name = local.json_data.POLICY.policy_name
statements = local.json_data.POLICY.policy_statements
}
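Alternatively, the implicit dependency mentioned above can be created by referencing the group resource inside the statements themselves instead of reading them from the JSON file — a sketch that mirrors the statements in init_values.json:
resource "oci_identity_policy" "test_policy" {
  compartment_id = local.json_data.TERRAFORM.tenancy_ocid
  description    = local.json_data.POLICY.policy_description
  name           = local.json_data.POLICY.policy_name
  # Referencing the group's name attribute makes Terraform create the group first.
  statements = [
    "Allow group ${oci_identity_group.create_group_Traininggroup.name} to manage virtual-network-family in Tenancy",
    "Allow group ${oci_identity_group.create_group_Traininggroup.name} to manage instance-family in Tenancy",
  ]
}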