Enable OpenSearch through Terraform - Elasticsearch

I am trying to create an OpenSearch monitor and am using a Terraform provider for it. I am using Terraform 0.11.x, and it throws an error on terraform apply:
The following provider constraints are not met by the currently-installed
provider plugins:
* elasticsearch (any version)
Terraform script used:
terraform {
  required_providers {
    elasticsearch = {
      source  = "phillbaker/elasticsearch"
      version = "~> 1.6.3"
    }
  }
}

provider "elasticsearch" {
  url      = "https://search-events-pqrhr4w3u4dzervg41frow4mmy.us-east-1.es.amazonaws.com"
  insecure = true
}

resource "elasticsearch_index" "events" {
  name               = "events"
  number_of_shards   = 1
  number_of_replicas = 1
}

Related

Error in creating EC2 resource using Terraform

I'm trying to create an EC2 instance using Terraform (I'm new to the area). I'm following the tutorial, but I think there's something wrong with the user I created in AWS.
Steps I followed:
Created a user in AWS
a) I added it to a group that has the AmazonEC2FullAccess policy
b) I created the credentials to use the AWS CLI
I used the file suggested by the Terraform tutorial:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
I ran the aws configure command and entered the access key and secret key values.
I ran terraform init and it worked.
When I run terraform plan, this error appears:
Error: configuring Terraform AWS Provider: error validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: xxxxxxxxxxxxxxxx, api error InvalidClientTokenId: The security token included in the request is invalid.
Any idea?
I missed the parameter "profile" in main.tf. So the new file is:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
It works now!
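For reference, if aws configure wrote the keys to a named profile rather than default, the provider can point at that profile explicitly. A minimal sketch (the profile name and credentials path below are hypothetical):

provider "aws" {
  region                   = "us-east-1"
  profile                  = "my-terraform-user"     # hypothetical named profile
  shared_credentials_files = ["~/.aws/credentials"]  # default location; adjust if yours differs
}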

Error while spinning up an EC2 instance with a key using a Terraform script

I want to provision an EC2 instance with a key pair and run a script inside the instance.
filename: instance.tf
resource "aws_key_pair" "mykey" {
key_name = "terraform-nverginia"
public_key = "${file ("${var.PATH_TO_PUBLIC_KEY}")}"
}
resource "aws_instance" "demo" {
ami = "${lookup (var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykey.key_name}"
tags = {
Name = "T-instance"
}
provisioner "file" { // copying file from local to remote server
source = "deployscript.sh"
destination = "/home/ec2-user/deploy.sh" //check if both the file names are same or not.
}
provisioner "remote-exec" { // executing script to do some deployment in the server.
inline = [
"chmod +x /home/ec2-user/deploy.sh",
"sudo /home/ec2-user/deploy.sh"
]
}
connection {
type = "ssh" // To connect to the instance
user = "${var.INSTANCE_USERNAME}"
host = "122.171.19.4" // My personal laptop's ip address
private_key = "${file ("${var.PATH_TO_PRIVATE_KEY}")}"
}
} // end of resource aws_instance
//-------------------------------------------------
filename: provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
filename: vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}

variable "AWS_REGION" {
  default = "us-east-1"
}

variable "AMIS" {
  type = map
  default = {
    us-east-1 = "ami-0574da719dca65348"
    us-east-2 = "ami-0a606d8395a538502"
  }
}

variable "PATH_TO_PRIVATE_KEY" {
  default = "terraform-nverginia"
}

variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}

variable "INSTANCE_USERNAME" {
  default = "ec2-user"
}
filename: terraform.tfvars
AWS_ACCESS_KEY = "<Access key>"
AWS_SECRET_KEY = "<Secret key>"
Error:
PS D:\Rajiv\DevOps-Practice\Terraform\demo-2> terraform plan
╷
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root module and configure the
│ provider's required arguments as described in the provider documentation.
│
│ Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 594b6dab-e087-4678-8c57-63a65c3d3d41, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on <empty> line 0:
│ (source code not available)
I am expecting an EC2 instance to be created and the script to be run.
Providers are plugins that help Terraform interact with specific cloud services. You must declare and install a cloud provider before you can use that cloud service via Terraform. Refer to this link: https://developer.hashicorp.com/terraform/language/providers. In your code, try adding the AWS provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.48.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
Then run terraform init to install the provider.
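The InvalidClientTokenId part of the error also means no valid credentials are reaching the provider. One option, shown here only as a minimal sketch that reuses the AWS_ACCESS_KEY and AWS_SECRET_KEY variables already declared in vars.tf and terraform.tfvars (environment variables or aws configure are usually preferable to hard-wiring keys), is to pass them into the provider block:

provider "aws" {
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY # populated from terraform.tfvars
  secret_key = var.AWS_SECRET_KEY
}

Alternatively, export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (or run aws configure) so the provider picks the credentials up without them appearing in the configuration.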

How to get newly created instance IDs using Terraform

I am creating AWS EC2 instance(s) using an auto scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity I have removed some code:
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working, but I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like neither aws_launch_template nor aws_autoscaling_group exports the instance IDs. What are my options here?
Terraform is probably completing, and exiting, before the auto-scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing them; the auto-scaling group is. You would need to use another tool, like the AWS CLI, to get the instance IDs.
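If you do want to read the IDs back through Terraform, one option is the aws_instances data source, filtered on the tag the auto-scaling group attaches to its instances. This is only a sketch: it assumes the instances already exist at the moment the data source is read (for example in a later apply or a separate configuration), for exactly the reason described above.

# Looks up whatever instances the ASG is currently running, via its auto-generated tag.
data "aws_instances" "service_instances" {
  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.service_autoscaling_group.name
  }
  instance_state_names = ["pending", "running"]
}

output "service_instance_ids" {
  value = data.aws_instances.service_instances.ids
}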

Terraform timeout issue on local system

So I'm new to Terraform and trying to learn it, with some great difficulty. I run into a timeout issue when running certain things against a Kubernetes cluster that is hosted locally.
Setup
Running on Windows 10
Running Docker for Windows with Kubernetes cluster enabled
Running WSL2 Ubuntu 20 on Windows
Terraform installed and kubectl able to access the cluster
Coding
I'm following the coding example set up from this website:
https://nickjanetakis.com/blog/configuring-a-kind-cluster-with-nginx-ingress-using-terraform-and-helm
But with a modification: instead of demo.sh I'm running the steps manually, and the kubectl manifest it references I've turned into a Terraform file, as this is how I would do deployments in the future. Also, I've had to comment out the provisioner "local-exec", as the kubectl commands outright fail in Terraform.
Code file
resource "kubernetes_pod" "foo-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "foo-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "foo"
}
}
spec {
container {
name = "foo-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=foo"]
}
}
}
resource "kubernetes_pod" "bar-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "bar-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "bar"
}
}
spec {
container {
name = "bar-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=bar"]
}
}
}
resource "kubernetes_service" "foo-service" {
depends_on = [kubernetes_pod.foo-app]
metadata {
name = "foo-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "foo"
}
port {
port = 5678
}
}
}
resource "kubernetes_service" "bar-service" {
depends_on = [kubernetes_pod.bar-app]
metadata {
name = "bar-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "bar"
}
port {
port = 5678
}
}
}
resource "kubernetes_ingress" "example-ingress" {
depends_on = [kubernetes_service.foo-service, kubernetes_service.bar-service]
metadata {
name = "example-ingress"
namespace = var.ingress_nginx_namespace
}
spec {
rule {
host = "172.21.220.84"
http {
path {
path = "/foo"
backend {
service_name = "foo-service"
service_port = 5678
}
}
path {
path = "/var"
backend {
service_name = "bar-service"
service_port = 5678
}
}
}
}
}
}
The problem
I run into two problems. First, the pods cannot find the namespace, even though it has been built; kubectl shows it as a valid namespace as well.
But the main problem I have is timeouts. This happens on different elements altogether, depending on the example. In trying to deploy the pods to the local cluster I get a 5 minute timeout, with this as the output after 5 minutes:
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.foo-app,
│ on services.tf line 1, in resource "kubernetes_pod" "foo-app":
│ 1: resource "kubernetes_pod" "foo-app" {
│
╵
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.bar-app,
│ on services.tf line 18, in resource "kubernetes_pod" "bar-app":
│ 18: resource "kubernetes_pod" "bar-app" {
This will happen for several kinds of things; I have this problem with pods, deployments, and ingresses. This is very frustrating, and I would like to know: is there a particular setting I need to change, or am I doing something wrong with my setup?
Thanks!
Edit #1:
So I repeated this on an Ubuntu VM with Minikube installed and got the same behavior. I copied the scripts, got Terraform installed, confirmed Minikube is all up and running, yet I'm getting the same behavior there as well. I'm wondering if this is an issue between Kubernetes and Terraform?

UPDATED - Terraform OCI - create multiple VCNs in different regions

I would like to create two VCNs and other resources in two or more regions.
I uploaded my code to this GitHub account.
When I execute the code (you have to set the tenancy, user, fingerprint, etc.) I don't get errors, but:
When I go to the root region, everything is created (compartment and VCN)
When I go to the second region, the VCN is not created
Terraform version: v1.0.2
My VCN module has:
terraform {
  required_providers {
    oci = {
      source  = "hashicorp/oci"
      version = ">= 1.0.2"
      configuration_aliases = [
        oci.root,
        oci.region1
      ]
    }
  }
}
And when I call the VCN module, I pass:
module "vcn" {
source = "./modules/vcn"
providers = {
oci.root = oci.home
oci.region1 = oci.region1
}
...
...
And my providers are:
provider "oci" {
alias = "home"
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
region = local.json_data.TERRAFORM_work.region
}
provider "oci" {
alias = "region1"
region = var.region1
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
}
What should I change to create this VCN in two or more regions at the same time, using terraform plan and apply?
Thanks so much.
Your module module.vcn does not declare any provider. From the docs:
each module must declare its own provider requirements,
So you have to add something like this to your module:
terraform {
  required_providers {
    oci = {
      source  = "source_for-oci"
      version = ">= your_version"
    }
  }
}
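Beyond declaring the requirements, each resource inside the module also has to pick one of the passed-in aliases via the provider meta-argument; a resource without it will use the module's default oci provider rather than oci.region1. A rough sketch of what the module body could contain (resource names, CIDRs, and the compartment_id variable here are hypothetical, and argument names should be checked against the provider version you use):

# Inside ./modules/vcn: one VCN per aliased provider, i.e. one per region.
resource "oci_core_vcn" "root_vcn" {
  provider       = oci.root
  compartment_id = var.compartment_id
  cidr_block     = "10.0.0.0/16"
  display_name   = "vcn-root-region"
}

resource "oci_core_vcn" "region1_vcn" {
  provider       = oci.region1
  compartment_id = var.compartment_id
  cidr_block     = "10.1.0.0/16"
  display_name   = "vcn-region1"
}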
