Error while spinning up an EC2 instance with a key using a Terraform script - amazon-ec2

I want to provision an EC2 instance with a key pair and run a script inside the instance.
filename: instance.tf
resource "aws_key_pair" "mykey" {
key_name = "terraform-nverginia"
public_key = "${file ("${var.PATH_TO_PUBLIC_KEY}")}"
}
resource "aws_instance" "demo" {
ami = "${lookup (var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykey.key_name}"
tags = {
Name = "T-instance"
}
provisioner "file" { // copying file from local to remote server
source = "deployscript.sh"
destination = "/home/ec2-user/deploy.sh" //check if both the file names are same or not.
}
provisioner "remote-exec" { // executing script to do some deployment in the server.
inline = [
"chmod +x /home/ec2-user/deploy.sh",
"sudo /home/ec2-user/deploy.sh"
]
}
connection {
type = "ssh" // To connect to the instance
user = "${var.INSTANCE_USERNAME}"
host = "122.171.19.4" // My personal laptop's ip address
private_key = "${file ("${var.PATH_TO_PRIVATE_KEY}")}"
}
} // end of resource aws_instance
//-------------------------------------------------
filename: provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
filename: vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION"{
default = "us-east-1"
}
variable "AMIS" {
type = map
default = {
us-east-1 = "ami-0574da719dca65348"
us-east-2 = "ami-0a606d8395a538502"
}
}
variable "PATH_TO_PRIVATE_KEY" {
default = "terraform-nverginia"
}
variable "PATH_TO_PUBLIC_KEY"{
default = "mykey.pub"
}
variable "INSTANCE_USERNAME"{
default = "ec2-user"
}
filename: terraform.tfvars
AWS_ACCESS_KEY = "<Access key>"
AWS_SECRET_KEY = "<Secret key>"
Error:
PS D:\Rajiv\DevOps-Practice\Terraform\demo-2> terraform plan
╷
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root module and configure the
│ provider's required arguments as described in the provider documentation.
│
│ Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 594b6dab-e087-4678-8c57-63a65c3d3d41, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on <empty> line 0:
│ (source code not available)
I am expecting an EC2 instance to be created and the script to be run.

Providers are plugins that let Terraform interact with specific cloud services. You must declare and install a cloud provider before you can manage that cloud's services via Terraform; see https://developer.hashicorp.com/terraform/language/providers. In your code, try adding the AWS provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.48.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
Then run terraform init to install the provider.
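Since your vars.tf already declares AWS_ACCESS_KEY and AWS_SECRET_KEY, one way to wire those values into the provider is the sketch below (environment variables or a shared credentials file are generally preferred over putting keys in Terraform variables):

provider "aws" {
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY
  secret_key = var.AWS_SECRET_KEY
}

If the credentials are valid, terraform plan should then pass the sts:GetCallerIdentity check instead of failing with InvalidClientTokenId.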

Related

Error in creating EC2 resource using Terraform

I'm trying to create an EC2 instance using Terraform (I'm new to the area). I'm following the tutorial, but I think there's something wrong with the user I created in AWS.
Steps I followed:
Create a user in AWS
a) I added it to a group that has the AmazonEC2FullAccess policy
b) I created the credentials to use the AWS CLI
I used the file suggested by the Terraform tutorial:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
I ran the aws configure command and entered the key and secret key values.
I ran terraform init and it worked.
When I run terraform plan, the error appears:
Error: configuring Terraform AWS Provider: error validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: xxxxxxxxxxxxxxxx, api error InvalidClientTokenId: The security token included in the request is invalid.
Any idea?
I missed the parameter "profile" in main.tf. So, the new file is:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
It works now!
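For context, profile = "default" points the provider at the [default] section that aws configure writes to ~/.aws/credentials, which typically looks like this (placeholder values):

[default]
aws_access_key_id = <Access key>
aws_secret_access_key = <Secret key>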

Invoke Lambda with terraform - permissions

I want to invoke an AWS Lambda function that I am creating using Terraform (both the deployment and the invocation).
The Terraform is assuming a role in another account.
provider.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.33.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789101:role/AssumedRole"
    session_name = "TF"
  }
}
The invocation happens using a Terraform data source:
data "aws_lambda_invocation" "start-execution" {
function_name = aws_lambda_function.start-execution-lambda.function_name
depends_on = [aws_lambda_function.start-execution-lambda]
input = <<JSON
{
"key1": "value1",
"key2": "value2"
}
JSON
}
The assumed role has the lambda:* permission. Unfortunately, there is a permission I am missing, because when I deploy the Terraform I get:
╷
│ Error: AccessDeniedException:
│ status code: 403, request id:
│
│ with data.aws_lambda_invocation.start-execution,
Now, when I grant administrator access to the assumed role, I can invoke the Lambda; it seems there is another service (not Lambda) that Terraform is using to invoke a Lambda function.
I had a lambda:* permission on my Lambda resource:

Action:
  - lambda:*
Resource:
  - arn:aws:lambda:Region:AccountID:function:My-function

But the invoke action acts on the Lambda function's versions, so it needs a qualified permission as well:

Action:
  - lambda:*
Resource:
  - arn:aws:lambda:Region:AccountID:function:My-function:*
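Expressed in Terraform's own terms, the working policy could be built with an aws_iam_policy_document data source, roughly like the sketch below (the data source name is arbitrary, and Region/AccountID/My-function are placeholders):

data "aws_iam_policy_document" "invoke_lambda" {
  statement {
    effect  = "Allow"
    actions = ["lambda:*"]
    resources = [
      "arn:aws:lambda:Region:AccountID:function:My-function",
      "arn:aws:lambda:Region:AccountID:function:My-function:*",
    ]
  }
}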

Terraform plugin crashed when provisioning the Azure Local Network Gateway

I am trying to provision an Azure local network gateway. When I run terraform apply I get the following error:
module.local_gateway.azurerm_local_network_gateway.local_gw: Creating...
╷
│ Error: Plugin did not respond
│
│ with module.local_gateway.azurerm_local_network_gateway.local_gw,
│ on modules/local-gateway/main.tf line 6, in resource "azurerm_local_network_gateway" "local_gw":
│ 6: resource "azurerm_local_network_gateway" "local_gw" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
Stack trace from the terraform-provider-azurerm_v3.0.0_x5 plugin:
panic: interface conversion: interface {} is nil, not string
goroutine 104 [running]:
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.expandLocalNetworkGatewayAddressSpaces(0x14001f87f00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:271 +0x234
github.com/hashicorp/terraform-provider-azurerm/internal/services/network.resourceLocalNetworkGatewayCreateUpdate(0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/internal/services/network/local_network_gateway_resource.go:160 +0xa5c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001f87f00, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:329 +0x170
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000dc6ee0, {0x108ae8b78, 0x14001cff880}, 0x14001a63ba0, 0x14001f87d80, {0x1081089a0, 0x14001f8dc00})
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go:467 +0x8d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x140004fa750, {0x108ae8b78, 0x14001cff880}, 0x14001d12dc0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go:977 +0xe38
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14000237880, {0x108ae8c20, 0x14002009e30}, 0x14001c1ee00)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server/server.go:603 +0x338
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x10864d540, 0x14000237880}, {0x108ae8c20, 0x14002009e30}, 0x14001a51020, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x1400159c2a0, 0x10d0d0f40, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1292 +0xc04
google.golang.org/grpc.(*Server).handleStream(0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680, 0x0)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1617 +0xa34
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x1400156d0e0, 0x140002a6fc0, {0x108b4df08, 0x14000448d80}, 0x14001a77680)
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:940 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
/opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/hashicorp/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:938 +0x1f0
Error: The terraform-provider-azurerm_v3.0.0_x5 plugin crashed!
This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
And here's my local_gw.tf code:
resource "azurerm_local_network_gateway" "local_gw" {
name = var.azurerm_local_network_gateway_name
location = var.location
resource_group_name = var.rg_name
gateway_address = var.gateway_address
address_space = var.local_gw_address_space # The gateway IP address to connect with
tags = merge(var.common_tags)
}
This is where it is being called as a module in main.tf
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address                    = ""
  local_gw_address_space             = [""]

  common_tags = {
    "environment" = "test"
    "managedby"   = "devops"
    "developedby" = "jananath"
  }

  project           = "mysvg"
  resource_location = "Germany West Central"
}

# Local Gateway
module "local_gateway" {
  source = "./modules/local-gateway"

  location                           = local.resource_location
  rg_name                            = var.rg_name
  azurerm_local_network_gateway_name = var.azurerm_local_network_gateway_name
  gateway_address                    = var.gateway_address
  local_gw_address_space             = var.local_gw_address_space

  common_tags = merge(
    local.common_tags,
    {
      "Name" = "${local.project}-${var.azurerm_local_network_gateway_name}"
    },
  )
}
This is my provider.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "shared-resources"
    storage_account_name = "janasvtfstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}
Can someone help me fix this?
The multiple declarations of module "local_gateway" caused this problem. There is no need to declare the items again in the main TF file; as shown below, simply declaring the module suffices.
module "local_gateway" {
source = "./modules/local_gw/"
}
Variables are defined directly in the code in the updated snippet below.
Step 1:
main.tf code as follows:
locals {
  azurerm_local_network_gateway_name = "local-gw"
  gateway_address                    = ""
  local_gw_address_space             = [""]

  common_tags = {
    "environment" = "test"
    "managedby"   = "devops"
    "developedby" = "jananath"
  }

  project           = "mysvg"
  resource_location = "Germany West Central"
}

module "local_gateway" {
  source = "./modules/local_gw/"
}
Module -> local_gw -> local_gw.tf code as follows:
resource "azurerm_local_network_gateway" "local_gw" {
name = "localgatewayswarnademo"
location = "Germany West Central"
resource_group_name = "rg-swarnademonew"
gateway_address = "12.13.14.15"
address_space = ["10.0.0.0/16"]
tags = merge("demo")
}
provider.tf code as follows:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }

  # backend "azurerm" {
  #   resource_group_name  = "shared-resources"
  #   storage_account_name = "janasvtfstate"
  #   container_name       = "tfstate"
  #   key                  = "terraform.tfstate"
  # }
}

provider "azurerm" {
  features {}
}
Note: the storage account backend is commented out. If it is required, create the storage account manually.
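If you do want the azurerm backend, the underlying storage can be created manually first, for example with the Azure CLI (a sketch reusing the names from the original backend block):

az group create --name shared-resources --location germanywestcentral
az storage account create --name janasvtfstate --resource-group shared-resources --sku Standard_LRS
az storage container create --name tfstate --account-name janasvtfstate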
Step 2: run the commands below, then verify the resource from the portal:

terraform plan
terraform apply -auto-approve

Terraform timeout issue on local system

So I'm new to Terraform and trying to learn it, with some great difficulty. I run into a timeout issue when applying certain things to a Kubernetes cluster that is hosted locally.
Setup
Running on Windows 10
Running Docker for Windows with the Kubernetes cluster enabled
Running WSL2 Ubuntu 20 on Windows
Installed Terraform and able to use kubectl against the cluster
Coding
I'm following the coding example set up on this website:
https://nickjanetakis.com/blog/configuring-a-kind-cluster-with-nginx-ingress-using-terraform-and-helm
But with a modification: instead of demo.sh I'm running the steps manually, and I've turned the kubectl file it references into a Terraform file, since this is how I would do deployments in the future. I've also had to comment out the provisioner "local-exec", as the kubectl commands outright fail in Terraform.
Code file
resource "kubernetes_pod" "foo-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "foo-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "foo"
}
}
spec {
container {
name = "foo-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=foo"]
}
}
}
resource "kubernetes_pod" "bar-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "bar-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "bar"
}
}
spec {
container {
name = "bar-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=bar"]
}
}
}
resource "kubernetes_service" "foo-service" {
depends_on = [kubernetes_pod.foo-app]
metadata {
name = "foo-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "foo"
}
port {
port = 5678
}
}
}
resource "kubernetes_service" "bar-service" {
depends_on = [kubernetes_pod.bar-app]
metadata {
name = "bar-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "bar"
}
port {
port = 5678
}
}
}
resource "kubernetes_ingress" "example-ingress" {
depends_on = [kubernetes_service.foo-service, kubernetes_service.bar-service]
metadata {
name = "example-ingress"
namespace = var.ingress_nginx_namespace
}
spec {
rule {
host = "172.21.220.84"
http {
path {
path = "/foo"
backend {
service_name = "foo-service"
service_port = 5678
}
}
path {
path = "/var"
backend {
service_name = "bar-service"
service_port = 5678
}
}
}
}
}
}
The problem
I run into two problems. First, the pods cannot find the namespace, even though it has been built and kubectl shows it as a valid namespace.
But the main problem I have is timeouts. These happen on different elements altogether, depending on the example. In trying to deploy the pods to the local cluster I get a 5-minute timeout, with this as the output after 5 minutes:
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.foo-app,
│ on services.tf line 1, in resource "kubernetes_pod" "foo-app":
│ 1: resource "kubernetes_pod" "foo-app" {
│
╵
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.bar-app,
│ on services.tf line 18, in resource "kubernetes_pod" "bar-app":
│ 18: resource "kubernetes_pod" "bar-app" {
This happens for several kinds of things; I have this problem with pods, deployments, and ingresses. It is very frustrating, and I would like to know: is there a particular setting I need to change, or am I doing something wrong with my setup?
Thanks!
Edit #1:
So I repeated this on an Ubuntu VM with Minikube installed and got the same behavior. I copied the scripts over, installed Terraform and Minikube, confirmed the cluster was up and running, yet I get the same behavior there as well. I'm wondering if this is an issue between Kubernetes and Terraform?
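Two things may be worth checking here. The image name hashicorp/http-eco looks like a typo for hashicorp/http-echo (the image the linked demo uses); a pod whose image cannot be pulled never becomes ready, and the Kubernetes provider then fails with exactly this "context deadline exceeded" error. Separately, the kubernetes_pod resource accepts a timeouts block to extend the default 5-minute create deadline while debugging. A sketch combining both, under those assumptions:

resource "kubernetes_pod" "foo-app" {
  metadata {
    name      = "foo-app"
    namespace = var.ingress_nginx_namespace
    labels = {
      app = "foo"
    }
  }

  spec {
    container {
      name  = "foo-app"
      image = "hashicorp/http-echo:0.2.3" # "http-echo", not "http-eco"
      args  = ["-text=foo"]
    }
  }

  # give the pod longer than the default 5 minutes to become ready
  timeouts {
    create = "10m"
  }
}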

Decrypting Windows Password in terraform

I'm trying to set up a Terraform script to deploy a Windows server. When running terraform apply I get the error message referenced below.
Error: Invalid reference
on main.tf line 44, in resource "aws_instance" "server":
44: password = "${rsadecrypt(aws_instance.server[0].password_data, file(KEY_PATH))}"
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
AFAIK the resource is "aws_instance", the name is "server[0]", and the attribute is "password_data". I know I'm missing something but don't know what; any assistance would be appreciated.
The full resource block is below in case there is something I'm missing in there.
Thanks
resource "aws_instance" "server" {
ami = var.AMIS[var.AWS_REGION]
instance_type = var.AWS_INSTANCE
vpc_security_group_ids = [module.networking.security_group_id_out]
subnet_id = module.networking.subnet_id_out
## Use this count key to determine how many servers you want to create.
count = 1
key_name = var.KEY_NAME
tags = {
# Name = "Server-Cloud"
Name = "Server-${count.index}"
}
root_block_device {
volume_size = var.VOLUME_SIZE
volume_type = var.VOLUME_TYPE
delete_on_termination = true
}
get_password_data = true
provisioner "remote-exec" {
connection {
host = coalesce(self.public_ip, self.private_ip)
type = "winrm"
## Need to provide your own .pem key that can be created in AWS or on your machine for each provisioned EC2.
password = ${rsadecrypt(aws_instance.server[0].password_data, file(KEY_PATH))}
}
inline = [
"powershell -ExecutionPolicy Unrestricted C:\\Users\\Administrator\\Desktop\\installserver.ps1 -Schedule",
]
}
provisioner "local-exec" {
command = "echo ${self.public_ip} >> ../public_ips.txt"
}
}
Use password = "${rsadecrypt(self.password_data, file("/root/.ssh/id_rsa"))}" without user = "admin", as below:
resource "aws_instance" "windows_server" {
get_password_data = "true"
connection {
host = "${self.public_ip}"
type = "winrm"
https = false
password = "${rsadecrypt(self.password_data, file("/root/.ssh/id_rsa"))}"
agent = false
insecure = "true"
}
}
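Applied to the original resource, a minimal corrected fragment might look like the sketch below. Two assumptions are made: KEY_PATH is declared as a variable (the bare KEY_PATH in file(KEY_PATH) is what triggers "Invalid reference", since Terraform parses it as a reference to a resource type named KEY_PATH), and self is used because a resource cannot refer to itself as aws_instance.server[0] from inside its own provisioner blocks:

resource "aws_instance" "server" {
  ami               = var.AMIS[var.AWS_REGION]
  instance_type     = var.AWS_INSTANCE
  key_name          = var.KEY_NAME
  count             = 1
  get_password_data = true

  provisioner "remote-exec" {
    connection {
      host = coalesce(self.public_ip, self.private_ip)
      type = "winrm"
      # "self" is required here; referring to aws_instance.server from
      # inside its own resource block is an invalid self-reference,
      # and KEY_PATH must be referenced as var.KEY_PATH.
      password = rsadecrypt(self.password_data, file(var.KEY_PATH))
    }
    inline = [
      "powershell -ExecutionPolicy Unrestricted C:\\Users\\Administrator\\Desktop\\installserver.ps1 -Schedule",
    ]
  }
}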
