Retrieve instance ID to create CloudWatch alarm - amazon-ec2

I'm trying to create a CloudWatch alarm using Terraform, to alert when EC2 disk space exceeds a certain threshold such as 75%.
The module below works fine if I hard-code the instance ID. However, I would like the alarm to automatically pick up the new instance ID if the instance terminates and auto-scaling launches a replacement.
Any suggestions much appreciated, thanks.
resource "aws_cloudwatch_metric_alarm" "disk_space_alarm" {
  alarm_name          = var.disk_space_alarm_name
  comparison_operator = var.comparison_operator
  evaluation_periods  = var.disk_space_alarm_evaluation_periods
  metric_name         = "disk_used_percent"
  namespace           = "CWAgent"
  period              = var.disk_space_alarm_period
  statistic           = "Average"
  threshold           = var.disk_space_alarm_threshold
  alarm_description   = var.disk_space_alarm_description
  alarm_actions       = [var.notify_alert_arn]
  treat_missing_data  = var.disk_space_alarm_missing_data
  dimensions = {
    "path"                 = "/"
    "InstanceId"           = "i-01234567890"
    "AutoScalingGroupName" = var.autoscaling_group_name
    "ImageId"              = var.ami_id
    "InstanceType"         = var.instance_type
    "device"               = "nvme0n1p1"
    "fstype"               = "xfs"
  }
}

Maybe you can reference the ID with something like:
"InstanceId" = aws_instance.your_instance.id
Another way I can think of is declaring an output for the ID and referencing that output:
resource "aws_instance" "app_server" {
  ami           = "ami-08d70e59c07c61a3a"
  instance_type = "t2.micro"
  tags = {
    Name = var.instance_name
  }
}
output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.app_server.id
}
Then, from the calling module, reference the output as module.<module_name>.instance_id, for example (assuming the module is called app_server):
"InstanceId" = module.app_server.instance_id
https://learn.hashicorp.com/tutorials/terraform/aws-outputs?in=terraform/aws-get-started
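Since Auto Scaling replaces instances (and therefore instance IDs), another option worth considering is dropping the InstanceId dimension entirely and alarming on the metric aggregated by AutoScalingGroupName, which survives instance replacement. A minimal sketch, assuming the CloudWatch agent is configured to aggregate on that dimension (aggregation_dimensions = [["AutoScalingGroupName"]] in the agent config):

```hcl
resource "aws_cloudwatch_metric_alarm" "disk_space_alarm" {
  alarm_name          = var.disk_space_alarm_name
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "disk_used_percent"
  namespace           = "CWAgent"
  period              = 300
  statistic           = "Average"
  threshold           = 75
  alarm_actions       = [var.notify_alert_arn]

  # No InstanceId dimension: the alarm keys only on values that stay
  # stable when the ASG replaces an instance.
  dimensions = {
    AutoScalingGroupName = var.autoscaling_group_name
    path                 = "/"
    device               = "nvme0n1p1"
    fstype               = "xfs"
  }
}
```

With this shape, nothing in the alarm needs to change when a new instance launches.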

Related

Terraform - passing vars between different resources

I have one resource to create an EC2 instance, and another one for creating an EBS volume (with the attachment inside).
In the EBS module I need to set instance_id = aws_instance.ec2.id, which is created in the EC2 resource. How can I pass the value from one resource to the other?
I'm using modules with a tfvars file to consume both resources and create an EC2 instance with an external disk.
ec2 main file:
# NIC
resource "aws_network_interface" "nic" {
  subnet_id   = "${var.subnet_id}"
  private_ips = "${var.ip_address}"
  tags        = { Name = var.tags }
}
# EC2 Instance
resource "aws_instance" "ec2" {
  ami                  = "${var.ami}"
  instance_type        = "${var.instance_type}"
  iam_instance_profile = "${var.iam_instance_profile}"
  tags                 = { Name = var.tags }
  key_name             = "${var.key_name}"
  network_interface {
    network_interface_id = "${aws_network_interface.nic.id}"
    device_index         = 0
  }
}
External disk main file:
resource "aws_ebs_volume" "external_disk" {
  availability_zone = "${var.availability_zone}"
  size              = "${var.disk_size}"
  type              = "${var.disk_type}"
  tags              = { Name = var.tags }
}
resource "aws_volume_attachment" "disk_attach" {
  device_name = "${var.device_name}"
  volume_id   = aws_ebs_volume.external_disk.id
  instance_id = aws_instance.ec2.id
}
Main env module:
module "external_disk_red" {
  source            = "../source-modules/external-disk"
  availability_zone = "${var.availability_zone}"
  size              = "${var.disk_size_red}"
  type              = "${var.disk_type_red}"
}
module "red" {
  source               = "../source-modules/ec2"
  region               = "${var.region}"
  access_key           = "${var.access_key}"
  secret_key           = "${var.secret_key}"
  ami                  = "${var.ami}"
  instance_type        = "${var.instance_type}"
  iam_instance_profile = "${var.iam_instance_profile}"
  tags                 = "${var.tags_red}"
  key_name             = "${var.key_name}"
  ip_address           = "${var.ip_address_red}"
  subnet_id            = "${var.subnet_id}"
  device_name          = "${var.volume_path_red}"
}
What you would have to do in this situation is add an output to your EC2 module and use it as an input (a variable) in your external-disk module.
Also, I don't know which version of Terraform you are using, but wrapping a reference in double quotes is considered legacy syntax (unless you actually want string interpolation).
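The difference, in a quick sketch:

```hcl
# Legacy (0.11-style): every reference is wrapped in an interpolation string
instance_type = "${var.instance_type}"

# Current (0.12+): reference the value directly, no quotes
instance_type = var.instance_type

# Quotes and ${} are still used when you genuinely interpolate into a string
name = "ec2-${var.app_name}-server"
```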
So...
source-modules/ec2/output.tf
output "instance_id" {
  value       = aws_instance.ec2.id
  description = "ID of the EC2 instance"
}
source-modules/external-disk/variables.tf
variable "instance_id" {
  type        = string
  description = "ID of the EC2 instance"
}
source-modules/external-disk/main.tf
resource "aws_ebs_volume" "external_disk" {
  availability_zone = var.availability_zone
  size              = var.disk_size
  type              = var.disk_type
  tags              = { Name = var.tags }
}
resource "aws_volume_attachment" "disk_attach" {
  device_name = var.device_name
  volume_id   = aws_ebs_volume.external_disk.id
  instance_id = var.instance_id
}
Main env module
module "external_disk_red" {
  source            = "../source-modules/external-disk"
  availability_zone = var.availability_zone
  size              = var.disk_size_red
  type              = var.disk_type_red
  instance_id       = module.red.instance_id
}
module "red" {
  source               = "../source-modules/ec2"
  region               = var.region
  access_key           = var.access_key
  secret_key           = var.secret_key
  ami                  = var.ami
  instance_type        = var.instance_type
  iam_instance_profile = var.iam_instance_profile
  tags                 = var.tags_red
  key_name             = var.key_name
  ip_address           = var.ip_address_red
  subnet_id            = var.subnet_id
  device_name          = var.volume_path_red
}

Trying to use terraform to create multiple EC2 instances with separate route 53 records

So I have a project where I'm trying to do something simple: create a reusable setup that provisions the following:
EC2 SG - 1 per workload
EC2 instances - could be 1 or more
Route 53 records - 1 per EC2 instance created
This works just fine with a single instance, but when I increase the count to anything other than 1, I get the following:
Error: Invalid index
│
│ on .terraform/modules/ec2_servers_dns_name/main.tf line 18, in resource "aws_route53_record" "this":
│ 18: records = split(",", var.records[count.index])
│ ├────────────────
│ │ count.index is 1
│ │ var.records is list of string with 1 element
│
│ The given key does not identify an element in this collection value: the
│ given index is greater than or equal to the length of the collection.
In my core main.tf, the modules for EC2 and Route53 look like below:
# EC2 Instances
module "ec2_servers" {
  ami                         = data.aws_ami.server.id
  associate_public_ip_address = var.ec2_associate_public_ip_address
  disable_api_termination     = var.ec2_disable_api_termination
  ebs_optimized               = var.ec2_ebs_optimized
  instance_count              = var.ec2_servers_instance_count
  instance_dns_names          = var.ec2_servers_dns_name_for_tags
  instance_type               = var.ec2_servers_instance_type
  key_name                    = var.ec2_key_name
  monitoring                  = var.ec2_enhanced_monitoring
  name                        = format("ec2-%s-%s-server", local.tags["application"], local.tags["environment"])
  rbd_encrypted               = var.ec2_rbd_encrypted
  rbd_volume_size             = var.ec2_rbd_volume_size
  rbd_volume_type             = var.ec2_rbd_volume_type
  subnet_id                   = concat(data.terraform_remote_state.current-vpc.outputs.app_private_subnets)
  user_data                   = var.ec2_user_data
  vpc_security_group_ids      = [module.ec2_security_group_servers.this_security_group_id, var.baseline_sg]
  tags                        = local.tags
}
# Create DNS entry for EC2 Instances
module "ec2_servers_dns_name" {
  domain         = var.domain_name
  instance_count = var.ec2_servers_instance_count
  name           = var.ec2_servers_dns_name
  private_zone   = "true"
  records        = module.ec2_servers.private_ip
  ttl            = var.ttl
  type           = var.record_type
  providers = {
    aws = aws.network
  }
}
And the resources (EC2/Route53) in our core module repo are shown below:
EC2
locals {
  is_t_instance_type = replace(var.instance_type, "/^t[23]{1}\\..*$/", "1") == "1" ? "1" : "0"
}
resource "aws_instance" "this" {
  count                       = var.instance_count
  ami                         = var.ami
  associate_public_ip_address = var.associate_public_ip_address
  credit_specification {
    cpu_credits = local.is_t_instance_type ? var.cpu_credits : null
  }
  disable_api_termination              = var.disable_api_termination
  ebs_optimized                        = var.ebs_optimized
  iam_instance_profile                 = var.iam_instance_profile
  instance_initiated_shutdown_behavior = var.instance_initiated_shutdown_behavior
  instance_type                        = var.instance_type
  ipv6_addresses                       = var.ipv6_addresses
  ipv6_address_count                   = var.ipv6_address_count
  key_name                             = var.key_name
  lifecycle {
    ignore_changes = [private_ip, root_block_device, ebs_block_device, volume_tags, user_data, ami]
  }
  monitoring      = var.monitoring
  placement_group = var.placement_group
  private_ip      = var.private_ip
  root_block_device {
    encrypted   = var.rbd_encrypted
    volume_size = var.rbd_volume_size
    volume_type = var.rbd_volume_type
  }
  secondary_private_ips  = var.secondary_private_ips
  source_dest_check      = var.source_dest_check
  subnet_id              = element(var.subnet_id, count.index)
  tags                   = merge(tomap({ "Name" = var.name }), var.tags, var.instance_dns_names[count.index])
  tenancy                = var.tenancy
  user_data              = var.user_data[count.index]
  volume_tags            = var.volume_tags
  vpc_security_group_ids = var.vpc_security_group_ids
}
Route53
data "aws_route53_zone" "this" {
  name         = var.domain
  private_zone = var.private_zone
}
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}
resource "aws_route53_record" "this" {
  count   = var.instance_count
  name    = var.name[count.index]
  records = split(",", var.records[count.index])
  type    = var.type
  ttl     = var.ttl
  zone_id = data.aws_route53_zone.this.zone_id
}
It seems like the problem is possibly with the output for the EC2 private IP, but I'm not sure. Here's the output for the private IP in the EC2 resource:
output "private_ip" {
  description = "The private IP address assigned to the instance."
  value       = [aws_instance.this[0].private_ip]
}
And the records variable in the R53 resource is set to a list.
Any thoughts on how to pull the private IPs for the EC2 instances (whether there is one or several) and feed each one dynamically into the R53 module so the records can be created without issue?
Try
output "private_ip" {
  description = "The private IP addresses assigned to the instances."
  value       = aws_instance.this[*].private_ip
}
Indexing with [0] returns only one element. The [*] splat expression returns the private IP of every instance created by count, and it already produces a list, so there is no need to wrap it in additional brackets.
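Putting that together, a sketch of how the corrected output lines up with the consuming record resource (variable names are taken from the question; the exact record wiring is an assumption):

```hcl
# EC2 module: one private IP per instance created by count
output "private_ip" {
  description = "The private IP addresses assigned to the instances."
  value       = aws_instance.this[*].private_ip
}

# Route 53 module: var.records now has one element per instance,
# so var.records[count.index] is valid for every index in the count
resource "aws_route53_record" "this" {
  count   = var.instance_count
  name    = var.name[count.index]
  records = split(",", var.records[count.index])
  type    = var.type
  ttl     = var.ttl
  zone_id = data.aws_route53_zone.this.zone_id
}
```

The "Invalid index" error went away because the records list length now matches instance_count instead of always being 1.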

Unable to create EC2 using Terraform. Route Table Association stuck in creating mode

I am trying to create a simple infrastructure that includes EC2, a VPC, and internet connectivity through an Internet Gateway. While the infrastructure is being created with terraform apply, the terminal output gets stuck in "creating" mode for approximately 5-6 minutes on the route table association (by subnet ID), and then finally throws an error that the VPC ID, route table ID, and subnet ID do not exist or were not found.
Sharing some specific code below :
resource "aws_route_table" "dev-public-crt" {
  vpc_id = "aws_vpc.main-vpc.id"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "aws_internet_gateway.dev-igw.id"
  }
  tags = {
    Name = "dev-public-crt"
  }
}
resource "aws_route_table_association" "dev-crta-public-subnet-1" {
  subnet_id      = "aws_subnet.dev-subnet-public-1.id"
  route_table_id = "aws_route_table.dev-public-crt.id"
}
resource "aws_vpc" "dev-vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "dev-vpc"
  }
}
resource "aws_subnet" "dev-subnet-public-1" {
  vpc_id                  = "aws_vpc.dev-vpc.id"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  tags = {
    Name = "dev-subnet-public-1"
  }
}
You need to remove the " around all the reference values you have there: vpc_id = "aws_vpc.main-vpc.id" should be vpc_id = aws_vpc.main-vpc.id, and so on. Otherwise you are trying to create an aws_route_table in a VPC with the literal ID "aws_vpc.main-vpc.id".
Whenever you want to reference variables, resources, or data sources, either do not wrap them in " at all, or interpolate them into a string like "something ${aws_vpc.main-vpc.id} ...".
The result should probably look like:
resource "aws_route_table" "dev-public-crt" {
  vpc_id = aws_vpc.main-vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.dev-igw.id
  }
  tags = {
    Name = "dev-public-crt"
  }
}
resource "aws_route_table_association" "dev-crta-public-subnet-1" {
  subnet_id      = aws_subnet.dev-subnet-public-1.id
  route_table_id = aws_route_table.dev-public-crt.id
}
resource "aws_vpc" "dev-vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "dev-vpc"
  }
}
resource "aws_subnet" "dev-subnet-public-1" {
  vpc_id                  = aws_vpc.dev-vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  tags = {
    Name = "dev-subnet-public-1"
  }
}
No guarantee that this works, because there could still be invalid references (for example, the route table refers to aws_vpc.main-vpc while the VPC resource is named dev-vpc); those need to be cleaned up by you.

create azure vm from custom image using terraform error

I need to provision VMs in Azure from a custom image using Terraform. Everything works fine with an image from the marketplace, but when I try to specify my custom image, an error is returned. I have been banging my head against this all day.
Here is my tf script:
resource "azurerm_windows_virtual_machine" "tftest" {
  name                  = "myazurevm"
  location              = "eastus"
  resource_group_name   = "myresource-rg"
  network_interface_ids = [azurerm_network_interface.azvm1nic.id]
  size                  = "Standard_B1s"
  storage_image_reference {
    id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Compute/images/mytemplate"
  }
  storage_os_disk {
    name              = "my-os-disk"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }
  storage_data_disk {
    name              = "my-data-disk"
    managed_disk_type = "Premium_LRS"
    disk_size_gb      = 75
    create_option     = "FromImage"
    lun               = 0
  }
  os_profile {
    computer_name  = "myvmazure"
    admin_username = "admin"
    admin_password = "test123"
  }
  os_profile_windows_config {
    provision_vm_agent = true
  }
}
Here is the error returned during the plan phase:
2020-07-17T20:02:26.9367986Z ==============================================================================
2020-07-17T20:02:26.9368212Z Task        : Terraform
2020-07-17T20:02:26.9368456Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services (AWS) and Google Cloud Platform (GCP)
2020-07-17T20:02:26.9368678Z Version     : 0.0.142
2020-07-17T20:02:26.9368852Z Author      : Microsoft Corporation
2020-07-17T20:02:26.9369049Z Help        : [Learn more about this task](https://aka.ms/AA5j5pf)
2020-07-17T20:02:26.9369262Z ==============================================================================
2020-07-17T20:02:27.2826725Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe providers
2020-07-17T20:02:27.5303002Z .
2020-07-17T20:02:27.5304176Z └── provider.azurerm
2020-07-17T20:02:27.5363313Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe plan
2020-07-17T20:02:29.7788471Z Error: Insufficient os_disk blocks
2020-07-17T20:02:29.7793007Z   on line 0:
2020-07-17T20:02:29.7793199Z   (source code not available)
2020-07-17T20:02:29.7793472Z At least 1 "os_disk" blocks are required.
2020-07-17T20:02:29.7793975Z Error: Missing required argument
Do you have any suggestions to locate the issue?
I have finally figured out the issue. I was using the wrong Terraform resource:
wrong --> azurerm_windows_virtual_machine
correct --> azurerm_virtual_machine
azurerm_windows_virtual_machine doesn't support arguments like storage_os_disk and storage_data_disk, and is not the right one for custom images unless the image is published in a Shared Image Gallery.
See the documentation for the options supported by each resource:
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html
https://www.terraform.io/docs/providers/azurerm/r/windows_virtual_machine.html
First, follow this guide:
https://learn.microsoft.com/pt-br/azure/virtual-machines/windows/upload-generalized-managed?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
Then here is my full code:
resource "azurerm_resource_group" "example" {
  name     = "example-resources1"
  location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
  name                = "example-network1"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
  name                 = "internal1"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
  name                = "example-nic1"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  ip_configuration {
    name                          = "internal1"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}
resource "azurerm_virtual_machine" "example" {
  name                = "example-machine1"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  vm_size             = "Standard_B1s"
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]
  storage_image_reference {
    // just copy the ID of the image you created
    id = "/subscriptions/XXXXXXXXXXXXX/resourceGroups/ORIGEM/providers/Microsoft.Compute/images/myImage"
  }
  storage_os_disk {
    name              = "my-os-disk"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }
  os_profile {
    computer_name  = "myvmazure"
    admin_username = "adminusername"
    admin_password = "testenovo#123"
  }
  os_profile_windows_config {
    provision_vm_agent = true
  }
}
// Below, the code that calls the PowerShell custom script extension
resource "azurerm_virtual_machine_extension" "software" {
  name = "install-software"
  //resource_group_name  = azurerm_resource_group.example.name
  virtual_machine_id   = azurerm_virtual_machine.example.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"
  protected_settings = <<SETTINGS
{
  "commandToExecute": "powershell -encodedCommand ${textencodebase64(file("install.ps1"), "UTF-16LE")}"
}
SETTINGS
}
You can use a custom image with the azurerm_windows_virtual_machine resource by setting the source_image_id parameter. The documentation notes that "One of either source_image_id or source_image_reference must be set": one is used for marketplace/gallery image references and the other for managed image IDs.
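A minimal sketch of that approach (the subscription path, names, and password variable are placeholders):

```hcl
resource "azurerm_windows_virtual_machine" "tftest" {
  name                  = "myazurevm"
  location              = "eastus"
  resource_group_name   = "myresource-rg"
  network_interface_ids = [azurerm_network_interface.azvm1nic.id]
  size                  = "Standard_B1s"
  admin_username        = "adminusername"
  admin_password        = var.admin_password

  # Managed image (or gallery image version) ID instead of
  # a storage_image_reference block
  source_image_id = "/subscriptions/xxxx/resourceGroups/xxxx/providers/Microsoft.Compute/images/mytemplate"

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
}
```

Note that this resource uses an os_disk block (with caching and storage_account_type) and top-level admin_username/admin_password, rather than the storage_os_disk/os_profile blocks of the older azurerm_virtual_machine resource.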

Terraform - Shared volume on AWS

Trying to attach a shared volume to my Auto Scaling group. Not sure how to get that done in Terraform, or even if it is possible.
aws_volume_attachment takes a single instance ID, but I am expecting to put this in the launch configuration somehow. Can someone please help?
resource "aws_ebs_volume" "shared_volume" {
  availability_zone = "us-east-1"
  size              = 2
}
resource "aws_volume_attachment" "volume_attachment" {
  device_name  = "/dev/xvdb"
  instance_id  = "????"
  volume_id    = "${aws_ebs_volume.shared_volume.id}"
  skip_destroy = true
}
resource "aws_launch_configuration" "flume-conf" {
  image_id             = "${var.app_ami_id}"
  instance_type        = "${var.app_instance_type}"
  key_name             = "${var.ssh_key_name}"
  security_groups      = ["${var.app_security_group_id}"]
  user_data            = "${data.template_file.config.rendered}"
  iam_instance_profile = "${var.app_iam_role}"
  root_block_device {
    volume_size = 50
    volume_type = "gp2"
  }
  lifecycle {
    create_before_destroy = true
  }
}
resource "aws_autoscaling_group" "ec2_asg" {
  name                 = "${format("%s", var.app_name)}"
  launch_configuration = "${aws_launch_configuration.flume-conf.name}"
  min_size             = "${var.asg_min_size}"
  max_size             = "${var.asg_max_size}"
  vpc_zone_identifier  = ["${element(data.aws_subnet_ids.private.ids, 0)}", "${element(data.aws_subnet_ids.private.ids, 1)}"]
  availability_zones   = "${var.availability_zones}"
  depends_on           = []
  lifecycle {
    create_before_destroy = false
  }
}
EBS volumes cannot be mounted to multiple hosts (EBS Multi-Attach exists for io1/io2 volumes, but it is not a general-purpose shared file system). If you are looking for this functionality, consider EFS.
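If EFS fits, a hedged sketch of the idea: create the file system and a mount target, then mount it from user data so every instance the ASG launches attaches the same shared storage (the subnet and security group variables here are assumptions):

```hcl
resource "aws_efs_file_system" "shared" {
  creation_token = "shared-volume"
}

resource "aws_efs_mount_target" "shared" {
  file_system_id  = aws_efs_file_system.shared.id
  subnet_id       = var.private_subnet_id
  security_groups = [var.efs_security_group_id]
}

# Mount at boot via user data instead of aws_volume_attachment,
# so no per-instance ID is ever needed
locals {
  efs_user_data = <<-EOF
    #!/bin/bash
    mkdir -p /mnt/shared
    mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.shared.dns_name}:/ /mnt/shared
  EOF
}
```

The launch configuration would then use local.efs_user_data (or merge it into the existing rendered template) as its user_data.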
