How to get newly created instance id using Terraform - amazon-ec2

I am creating AWS EC2 instance(s) using an auto scaling group and a launch template. I would like to get the instance IDs of the newly launched instances. Is this possible?
For brevity I have removed some code:
resource "aws_launch_template" "service_launch_template" {
name_prefix = "${var.name_prefix}-lt"
image_id = var.ami_image_id
iam_instance_profile {
name = var.instance_profile
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_lb_target_group" "service_target_group" {
name = "${var.name_prefix}-tg"
target_type = "instance"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "service_autoscaling_group" {
name = "${var.name_prefix}-asg"
max_size = var.max_instances
min_size = var.min_instances
desired_capacity = var.desired_instances
target_group_arns = [aws_lb_target_group.service_target_group.arn]
health_check_type = "ELB"
launch_template {
id = aws_launch_template.service_launch_template.id
version = aws_launch_template.service_launch_template.latest_version
}
depends_on = [aws_alb_listener.service_frontend_https]
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb" "service_frontend" {
name = "${var.name_prefix}-alb"
load_balancer_type = "application"
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "service_frontend_https" {
load_balancer_arn = aws_alb.service_frontend.arn
protocol = "HTTPS"
port = "443"
}
This is working, but I would like to output the instance IDs of the newly launched instances. From the Terraform documentation it looks like neither aws_launch_template nor aws_autoscaling_group exports the instance IDs. What are my options here?

Terraform is probably completing, and exiting, before the auto-scaling group has even triggered a scale-up event and created the instances. There's no way for Terraform to know about the individual instances, since Terraform isn't managing those instances; the auto-scaling group is managing them. You would need to use another tool, like the AWS CLI, to get the instance IDs.
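That said, if you do want to read the IDs back from within Terraform once the group has launched instances, one option (a sketch, not something the launch template or ASG resource exports) is the aws_instances data source, filtering on the aws:autoscaling:groupName tag that the ASG applies to its instances. It only returns instances that already exist when the data source is read, so it typically needs a later terraform apply or refresh; the names asg_members and asg_instance_ids below are just illustrative.
# Reads the IDs of instances currently in the ASG; only useful on a later
# apply/refresh, after the group has actually launched the instances.
data "aws_instances" "asg_members" {
  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.service_autoscaling_group.name
  }
  instance_state_names = ["pending", "running"]
}

output "asg_instance_ids" {
  value = data.aws_instances.asg_members.ids
}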

Related

Triggering a Lambda once a DMS Replication Task has completed in Terraform

I would like to trigger a Lambda once a DMS Replication Task has successfully completed. I have the following Terraform code, which successfully creates all the assets, but my Lambda is not being triggered.
resource "aws_dms_event_subscription" "my_event_subscription" {
enabled = true
event_categories = ["state change"]
name = "my-event-subscription"
sns_topic_arn = aws_sns_topic.my_event_subscription_topic.arn
source_ids = ["my-replication-task"]
source_type = "replication-task"
}
resource "aws_sns_topic" "my_event_subscription_topic" {
name = "my-event-subscription-topic"
}
resource "aws_sns_topic_subscription" "my_event_subscription_topic_subscription" {
topic_arn = aws_sns_topic.my_event_subscription_topic.arn
protocol = "lambda"
endpoint = aws_lambda_function.my_lambda_function.arn
}
resource "aws_sns_topic_policy" "allow_publish" {
arn = aws_sns_topic.my_event_subscription_topic.arn
policy = data.aws_iam_policy_document.allow_dms_and_events_document.json
}
resource "aws_lambda_permission" "allow_sns_invoke" {
statement_id = "AllowExecutionFromSNS"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.my_lambda_function.function_name
principal = "sns.amazonaws.com"
source_arn = aws_sns_topic.my_event_subscription_topic.arn
}
data "aws_iam_policy_document" "allow_dms_and_events_document" {
statement {
actions = ["SNS:Publish"]
principals {
identifiers = [
"dms.amazonaws.com",
"events.amazonaws.com"
]
type = "Service"
}
resources = [aws_sns_topic.my_event_subscription_topic.arn]
}
}
Am I missing something?
Is event_categories = ["state change"] correct? (This suggests state change is correct; I'm less concerned right now if the Lambda is triggered for every state change, and not just DMS-EVENT-0079.)
Is there something I can add to get CloudWatch logs from the event subscription, to tell me what's wrong?
You can try giving it a JSON like the sample event shared in the AWS documentation:
{
  "version": "0",
  "id": "11a11b11-222b-333a-44d4-01234a5b67890",
  "detail-type": "DMS Replication Task State Change",
  "source": "aws.dms",
  "account": "0123456789012",
  "time": "1970-01-01T00:00:00Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:dms:us-east-1:012345678901:task:AAAABBBB0CCCCDDDDEEEEE1FFFF2GGG3FFFFFF3"
  ],
  "detail": {
    "type": "ReplicationTask",
    "category": "StateChange",
    "eventType": "REPLICATION_TASK_STARTED",
    "eventName": "DMS-EVENT-0069",
    "resourceLink": "https://console.aws.amazon.com/dms/v2/home?region=us-east-1#taskDetails/taskName",
    "detailMessage": "Replication task started, with flag = fresh start"
  }
}
You can check how to give this as JSON in Terraform here
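For example, a minimal sketch of expressing that kind of event JSON as an EventBridge rule pattern with jsonencode, and pointing it at the Lambda, might look like the following (the rule and target names are illustrative, and you would also need an aws_lambda_permission allowing events.amazonaws.com to invoke the function):
# Match DMS replication task state-change events and forward them to the Lambda.
resource "aws_cloudwatch_event_rule" "my_dms_task_state_change" {
  name = "my-dms-task-state-change"
  event_pattern = jsonencode({
    source        = ["aws.dms"]
    "detail-type" = ["DMS Replication Task State Change"]
  })
}

resource "aws_cloudwatch_event_target" "my_dms_task_state_change_target" {
  rule = aws_cloudwatch_event_rule.my_dms_task_state_change.name
  arn  = aws_lambda_function.my_lambda_function.arn
}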

UPDATED - Terraform OCI - create multiple VCN in different regions

I would like to create 2 VCNs and other resources inside two or more regions.
I uploaded my code to this GitHub account.
When I execute the code (you have to set the tenancy, user, fingerprint, etc.) I don't get errors, but:
When I go to the root region, everything is created (compartment and VCN)
When I go to the second region, the VCN is not created
Terraform version: v1.0.2
My VCN module has:
terraform {
  required_providers {
    oci = {
      source = "hashicorp/oci"
      version = ">= 1.0.2"
      configuration_aliases = [
        oci.root,
        oci.region1
      ]
    }
  }
}
And when I call the VCN module I pass:
module "vcn" {
source = "./modules/vcn"
providers = {
oci.root = oci.home
oci.region1 = oci.region1
}
...
...
And my providers are:
provider "oci" {
alias = "home"
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
region = local.json_data.TERRAFORM_work.region
}
provider "oci" {
alias = "region1"
region = var.region1
tenancy_ocid = local.json_data.TERRAFORM_work.tenancy_ocid
user_ocid = local.json_data.TERRAFORM_work.user_ocid
private_key_path = local.json_data.TERRAFORM_work.private_key_path
fingerprint = local.json_data.TERRAFORM_work.fingerprint
}
What should I change to create this VCN inside two or more regions at the same time, using terraform plan and apply?
Thanks so much
Your module module.vcn does not declare any provider. From the docs:
each module must declare its own provider requirements
So you have to add something like this to your module:
terraform {
  required_providers {
    oci = {
      source = "source_for-oci"
      version = ">= your_version"
    }
  }
}
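Inside the module, any resource that should land in the second region then has to reference the aliased provider explicitly. A minimal sketch (the oci_core_vcn arguments and var.compartment_id here are illustrative, not taken from your code):
# Created with the oci.region1 provider alias, i.e. in the second region.
resource "oci_core_vcn" "vcn_region1" {
  provider       = oci.region1
  compartment_id = var.compartment_id
  cidr_blocks    = ["10.1.0.0/16"]
  display_name   = "vcn-region1"
}
Resources that don't set the provider argument will not use the oci.region1 alias, which is one common reason only the root-region VCN gets created.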

Terraform - Create ec2 instances in each availability zone

I am trying to create multiple EC2 instances with this script:
resource "aws_instance" "my-instance" {
count = 3
ami = ...
instance_type = ...
key_name = ...
security_groups = ...
tags = {
Name = "my-instance - ${count.index + 1}"
}
}
This creates 3 instances, but all three are in the same availability zone. I want to create one instance in each availability zone, or one in each of the availability zones that I provide. How can I do it?
I read that I can use the
subnet_id = ...
option to specify the availability zone where the instance should be created, but I am not able to figure out how to loop through instance creation (which is currently handled by the count parameter) and specify a different subnet_id for each.
Can someone help please?
There are several ways of accomplishing this. What I would recommend is to create a VPC with 3 subnets and place an instance in each subnet:
# Specify the region in which we would want to deploy our stack
variable "region" {
  default = "us-east-1"
}

# Specify 3 availability zones from the region
variable "availability_zones" {
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = var.region
}

# Create a VPC
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "my_vpc"
  }
}
# Create a subnet in each availability zone in the VPC. Keep in mind that at this point
# these subnets are private, without internet access. They would need other networking
# resources to make them accessible.
resource "aws_subnet" "my_subnet" {
count = length(var.availability_zones)
vpc_id = aws_vpc.my_vpc.id
cidr_block = cidrsubnet("10.0.0.0/16", 8, count.index)
availability_zone = var.availability_zones[count.index]
tags = {
Name = "my-subnet-${count.index}"
}
}
# Put an instance in each subnet
resource "aws_instance" "foo" {
count = length(var.availability_zones)
ami = ...
instance_type = "t2.micro"
subnet_id = aws_subnet.my_subnet[count.index].id
tags = {
Name = "my-instance-${count.index}"
}
}

launching aws elb instance using terraform

I am new to Terraform and AWS. All I want to do is launch an AWS EC2 instance behind an Elastic Load Balancer with Terraform. I found some configuration examples on various sites, but I don't know the right way to implement those configurations, what the folder structure should be, and so on. I have done this through the AWS GUI, but I'm not getting far with Terraform.
Here the server should be Apache2.
Any help is appreciated.
Thanks in advance
As per your requirement of creating an Elastic Load Balancer using Terraform, you will need to create the following resources.
If you already have the EC2 instances created and just want to attach them to the ELB:
Create Target Group
Create ELB
Assign the Target Group to your ELB
Register your existing instance to your Target Group (see the attachment example below)
If you don't have any instances created:
Create Target Group
Create ELB
Assign the Target Group to your ELB
Create Launch Template/Configuration
Create ASG, assign the ELB to ASG
The new instances created through the ASG will auto-register with the ELB target group.
Terraform resource examples:
Launch Configuration
resource "aws_launch_configuration" "Your_Launch_Configuration" {
name = "launch_conf_name"
instance_type = "Instance_Type"
image_id = "AMI_image_id"
key_name = "Key_Name"
security_groups = "security_groups_id"
user_data = "User Data"
iam_instance_profile = "Instance IAM Role"
}
Auto Scaling Group
resource "aws_autoscaling_group" "Your_ASG" {
name = "ASG Name"
launch_configuration = aws_launch_configuration.Your_Launch_Configuration.id
max_size = "Max size"
min_size = "Min Size"
desired_capacity = "Desired Capacity"
vpc_zone_identifier = "Your Subnet List"
tags = [{
"key" = "Name"
"value" = "ASG Name"
"propagate_at_launch" = true
}]
health_check_grace_period = "300"
target_group_arns = "set of your ELB target Group"
}
Load Balancer Target Group
resource "aws_load_balancer_target_group" "Your_target_group" {
name = "Target_group_name"
port = "80"
protocol = "HTTP"
vpc_id = "Your_vpcid"
tags = {
name = "Target_group_name"
}
health_check {
enabled = true
interval = 300 # health check interval
protocol = "HTTP"
timeout = 300 # timeout seconds
path = "/" # your health check path
}
}
Load Balancer
resource "aws_load_balancer" "your_load_balancer" {
name = load_balancer_name
load_balancer_type = "application"
internal = true # if not internet facing
subnets = ["List of your subnet id"]
security_groups = ["List of your security group id"]
tags = {
"name" = load_balancer_Target_group_name
}
}
Load Balancer Listener
resource "aws_lb_listener" "your_load_balancer_listener" {
  load_balancer_arn = aws_lb.your_load_balancer.arn # arn of your load balancer
  port = "80"
  protocol = "HTTP"
  default_action {
    target_group_arn = aws_lb_target_group.Your_target_group.arn # arn of your target group
    type = "forward"
  }
}
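For the first scenario, registering an existing instance with the target group can be done with aws_lb_target_group_attachment. A short sketch (your_existing_instance is a placeholder for however you reference that instance):
# Register an existing EC2 instance with the target group on port 80.
resource "aws_lb_target_group_attachment" "your_attachment" {
  target_group_arn = aws_lb_target_group.Your_target_group.arn
  target_id        = aws_instance.your_existing_instance.id
  port             = 80
}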

Terraform: Cannot create spot instance. Error: MaxSpotInstanceCountExceeded

I am trying to create a spot instance in Terraform, and the Terraform code appears to be fine, but I keep getting an error back saying MaxSpotInstanceCountExceeded.
NOTE: Right now this is just a test, hence I am not including security groups, IPs, etc.
Steps I have taken:
Checked that I have 0 spot instance requests created in the console.
Tried logging in to the console and creating a spot instance request. It works just fine.
Cancelled the spot instance request to ensure that I now have 0 spot instance requests.
Now I try to create virtually the same spot instance with the Terraform script below, but I get the error: MaxSpotInstanceCountExceeded.
Does anyone know why Terraform (or maybe AWS?) is not allowing me to create the spot instance using the Terraform script, but it works just fine from the console?
Thanks!
provider "aws" {
profile = "terraform_enterprise_user"
region = "us-east-2"
}
resource "aws_spot_instance_request" "MySpotInstance" {
# Spot Request Settings
wait_for_fulfillment = "true"
spot_type = "persistent"
instance_interruption_behaviour = "stop"
# Instance Settings
ami = "ami-0520e698dd500b1d1"
instance_type = "c4.large"
associate_public_ip_address = "1"
root_block_device {
volume_size = "10"
volume_type = "standard"
}
ebs_block_device {
device_name = "/dev/sdb"
volume_size = "50"
volume_type = "standard"
delete_on_termination = "true"
}
tags = {
Name = "MySpotInstance"
Application = "MyApp"
Environment = "TEST"
}
}
