I'm trying to figure out how to create a different number of EC2 instances in each of two Terraform workspaces. My approach is to keep all Terraform code in one GitHub branch. I would like a single aws_instance block that creates a different number of instances, with a different instance size per environment. I plan on using a TFVARS file per environment to specify which instance sizes to use. Any advice on how best to approach this scenario would be helpful. I am using Terraform version 0.12.26.
You can do something like this (3 instances for the staging workspace, 1 for any other workspace):
resource "aws_instance" "cluster_nodes" {
count = terraform.workspace == "staging" ? 3 : 1
ami = var.cluster_aws_ami
instance_type = var.cluster_aws_instance_type
# subnet_id = aws_subnet.cluster_subnet[var.azs[count.index]].id
subnet_id = var.public_subnet_ids[count.index]
vpc_security_group_ids = [aws_security_group.cluster_sg.id]
key_name = aws_key_pair.cluster_ssh_key.key_name
iam_instance_profile = "${aws_iam_instance_profile.cluster_ec2_instance_profile.name}"
associate_public_ip_address = true
tags = {
Name = "Cluster ${terraform.workspace} node-${count.index}"
}
}
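The same workspace check can also drive the instance size. One option is a map variable keyed by workspace, looked up at apply time; this is a sketch, and the variable name, sizes, and fallback value here are illustrative:

variable "cluster_instance_types" {
  description = "EC2 instance type per workspace; override per environment in a .tfvars file"
  type        = map(string)
  default = {
    staging    = "t3.medium"
    production = "t3.large"
  }
}

resource "aws_instance" "cluster_nodes" {
  count         = terraform.workspace == "staging" ? 3 : 1
  instance_type = lookup(var.cluster_instance_types, terraform.workspace, "t3.micro")
  # ... remaining arguments as above ...
}

Alternatively, keep instance_type as a plain string variable and pass a different file with terraform apply -var-file=staging.tfvars, which matches the per-environment TFVARS approach you describe.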
I am starting a Windows EC2 instance in AWS. After the server has been created, I want to install certain software, such as OpenSSH, and perform other tasks, such as creating a user. If I have a PowerShell script, how do I execute it on the remote instance?
I have a local PowerShell script, install_sft.ps1, and I want to execute it on the remote EC2 instance in AWS.
I know I need to use a "provisioner" but I am unable to get my head around how to use it for Windows.
resource "aws_instance" "win-master" {
provider = aws.lmedba-dc
ami = data.aws_ssm_parameter.WindowsAmi.value
instance_type = var.instance-type
key_name = "RPNVirginia"
associate_public_ip_address = true
vpc_security_group_ids = [aws_security_group.windows-sg.id]
subnet_id = aws_subnet.dc1.id
tags = {
Name = "Win server"
}
depends_on = [aws_main_route_table_association.set-master-default-rt-assoc]
}
You can do this by making use of the user_data parameter of the aws_instance resource:
resource "aws_instance" "win-master" {
...
user_data_base64 = "${base64encode(file(install_sft.ps1))}"
...
}
Just ensure that install_sft.ps1 is in the same directory as your Terraform code.
An EC2 instance's User Data script executes when it starts up for the first time. See the AWS documentation here for more details.
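One caveat, based on how the EC2Config/EC2Launch agents treat user data on Windows AMIs: the script only runs at first boot if it is wrapped in <powershell> tags. If install_sft.ps1 does not already contain them, you can add the wrapper in Terraform instead of using user_data_base64 (a sketch, not the only way to do it):

resource "aws_instance" "win-master" {
  # ...
  user_data = <<-EOT
    <powershell>
    ${file("install_sft.ps1")}
    </powershell>
  EOT
}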
I've been having trouble trying to dynamically assign availability zones to the EC2 instances I create via Terraform.
Context: I've created a shell script that takes user input to specify the number of instances to create. Based on that number, I want to assign each created instance to a specific availability zone using either a for or for_each loop.
shell script
#!/bin/bash
## Create EC2 instances on AWS
echo "Starting EC2 set up..."
echo "Specify number of EC2 instances to create: "
read number_of_instances
echo "You made a request for $number_of_instances instances to be created"
terraform validate
terraform apply -var instance_count=${number_of_instances}
main.tf
resource "aws_instance" "my-ec2-instance" {
ami = var.ami
count = var.instance_count
key_name = var.key_name
instance_type = var.instance_type
subnet_id = var.subnet_id
security_groups = [data.aws_security_group.my-sec-group.id]
availability_zone = ##How do I dynamically assign an availability zone here##
tags = {
Name = "my-ec2-instance-tag${count.index + 1}"
Project = "my terraform project"
}
}
Expected output for my EC2 instances:
If I want 2 instances created:
my-ec2-instance1 should be assigned to ap-southeast-1a
my-ec2-instance2 should be assigned to ap-southeast-1b
If I want 3 instances created:
my-ec2-instance1 should be assigned to ap-southeast-1a
my-ec2-instance2 should be assigned to ap-southeast-1b
my-ec2-instance3 should be assigned to ap-southeast-1c
If I want 5 instances created:
my-ec2-instance1 should be assigned to ap-southeast-1a
my-ec2-instance2 should be assigned to ap-southeast-1b
my-ec2-instance3 should be assigned to ap-southeast-1c
my-ec2-instance4 should be assigned to ap-southeast-1a
my-ec2-instance5 should be assigned to ap-southeast-1b
What I've tried so far:
variable.tf
variable "availability_zone_map" {
description = "Availability zone for instance"
type = map
default = {
"ap-southeast-1a" = 1
"ap-southeast-1b" = 2
"ap-southeast-1c" = 3
}
}
main.tf
resource "aws_instance" "my-ec2-instance" {
ami = var.ami
count = var.instance_count
key_name = var.key_name
instance_type = var.instance_type
subnet_id = var.subnet_id
security_groups = [data.aws_security_group.my-sec-group.id]
for_each = var.availability_zone_map
availability_zone = each.key
tags = {
Name = "my-ec2-instance-tag${count.index + 1}"
Project = "my terraform project"
}
}
This is definitely wrong, as a resource cannot take both count and for_each at the same time.
Would appreciate if anyone could help me get around this? Thank you!
If the intention is to use an ASG, then Marcin's answer is the best direction.
If you still want to manage EC2 instances manually for whatever reason (I was in a similar situation where an ASG was not an option), then the following will get you what you want.
variable "availability_zones" {
description = "Availability zones for instance"
type = list
default = [
"ap-southeast-1a" = 1
"ap-southeast-1b" = 2
"ap-southeast-1c" = 3
]
}
resource "aws_instance" "my-ec2-instance" {
ami = var.ami
count = var.instance_count
key_name = var.key_name
instance_type = var.instance_type
subnet_id = var.subnet_id
security_groups = [data.aws_security_group.my-sec-group.id]
availability_zone = var.availability_zones[ count % var.instance_count]
tags = {
Name = "my-ec2-instance-tag${count.index + 1}"
Project = "my terraform project"
}
}
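With three zones in the list and instance_count = 5, count.index runs 0 through 4, so the modulo picks ap-southeast-1a, 1b, 1c, 1a, 1b in order, which matches the expected output above.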
Based on the comments, the intent is to create an autoscaling group in AWS.
For that you need two components:
aws_launch_template
aws_autoscaling_group
Thus, instead of your my-ec2-instance resource, you would create a corresponding aws_launch_template.
The template is then referenced in the launch_template attribute of the ASG.
The instance_count would be assigned to the desired_capacity of the ASG.
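A minimal sketch of the two resources (the resource names and the subnet-list variable are illustrative, not taken from the question):

resource "aws_launch_template" "my_ec2_instance" {
  name_prefix            = "my-ec2-instance-"
  image_id               = var.ami
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [data.aws_security_group.my-sec-group.id]
}

resource "aws_autoscaling_group" "my_ec2_instances" {
  desired_capacity    = var.instance_count
  min_size            = var.instance_count
  max_size            = var.instance_count
  vpc_zone_identifier = var.subnet_ids # one subnet per AZ; the ASG spreads instances across them

  launch_template {
    id      = aws_launch_template.my_ec2_instance.id
    version = "$Latest"
  }
}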
I want to launch two instances via Terraform. The first will generate some certificate files and push them to an S3 bucket. The second instance will pull those certificates from that S3 bucket. Both operations are handled by user data. The problem is that the pull commands (AWS CLI) in the second instance's user data are not working. (They work when I run them from a shell.) I think the issue is that Terraform launches both instances at the same time, so the second instance comes up before the first has pushed the certificates to S3.
I also tried to handle this by adding "depends_on" to my code, but it did not work. I am looking for a way to launch the instances sequentially, e.g. launch the first instance, then launch the second one 30 seconds later. Here is the relevant part of the code.
data "template_file" "first_executor" {
template = file("some_path/first_executor.sh")
}
resource "aws_instance" "first_instance" {
ami = data.aws_ami.amazon-linux-2.id
instance_type = "t2.micro"
user_data = data.template_file.first_executor.rendered
network_interface {
device_index = 0
network_interface_id = aws_network_interface.first_instance-network-interface.id
}
}
###
data "template_file" "second_executor" {
template = file("some_path/second_executor.sh")
}
resource "aws_instance" "second_instance" {
depends_on = [aws_instance.first_instance]
ami = data.aws_ami.amazon-linux-2.id
instance_type = "t2.micro"
user_data = data.template_file.second_executor.rendered
network_interface {
device_index = 0
network_interface_id = aws_network_interface.second-network-interface.id
}
}
The answer is no. "depends_on" in Terraform means it will wait for the resource to be available, so your second EC2 instance is created as soon as the first EC2 instance has been created.
Terraform will not wait until your first EC2 instance is in the "running" state or until its user data has finished executing.
I would suggest going with depends_on and then, in your second EC2 instance's user data script, adding a loop that polls S3 and retries until the certificates are found.
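For example, something along these lines at the top of the second instance's user data script (the bucket and key names are placeholders):

#!/bin/bash
# Block until the certificate shows up in S3, then pull everything down
until aws s3 ls "s3://my-cert-bucket/certs/server.crt" > /dev/null 2>&1; do
  echo "Certificates not in S3 yet; retrying in 10 seconds..."
  sleep 10
done
aws s3 cp "s3://my-cert-bucket/certs/" /etc/certs/ --recursive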
I'm relatively new to Terraform and I'm trying to iterate over all aws_instance resources to apply a null_resource. Can you use multiple splats to access all instances, regardless of their names?
The EC2 instances are broken down by three types:
aws_instance.web.* (3 instances)
aws_instance.app.* (3 instances)
aws_instance.db.* (2 instances)
Here's my attempt to apply a null_resource to all eight aws_instances:
resource "null_resource" "install_security_package" {
#count = "${length(aws_instance)}" #terraform error: resource count can't reference variable: aws_instance
#count = "${length(aws_instance.*)}" #terraform error: resource variables must be three parts: TYPE.NAME.ATTR
count = "${length(aws_instance.*.*)}" #terraform error: unknown resource 'aws_instance.*'
connection {
type = "ssh"
host = "${element(aws_instance.*.private_ip, count.index)}"
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.private_key_path}")}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
]
}
}
It is not currently possible to match all resources of a given type. The "splat" syntax, as you've seen, only allows selecting all of the instances created from a particular resource block.
The closest you can get to this with Terraform today is to concatenate together the different resources:
concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)
In the current version of Terraform as of this answer, it is necessary to use some of the workarounds shared in GitHub issue #4084 to avoid duplicating that complex expression in multiple places. A forthcoming feature called Local Values will make this simpler in the near future, allowing the list to be given a name that can be re-used in multiple places:
# Won't work until Terraform PR#15449 is merged and released
locals {
  aws_instance_addrs = "${concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)}"
}

resource "null_resource" "install_security_package" {
  count = "${length(local.aws_instance_addrs)}"

  connection {
    type        = "ssh"
    host        = "${local.aws_instance_addrs[count.index]}"
    user        = "${lookup(var.user, var.platform)}"
    private_key = "${file("${var.private_key_path}")}"
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
    ]
  }
}
I am trying to set up a remote deployment with Capistrano on the Amazon cloud.
The idea: I SSH to a random machine of the autoscaling group, and from there I want to deploy to all the other machines. To do that, I need to get the names of the other instances so I can define the Capistrano servers I want to deploy to.
I have installed the Ruby SDK, but I cannot figure out the best way to retrieve the instance names (taking advantage of the fact that I am on the VPN).
I actually have two possibilities: either find the instances by tag (I have tagged them with "production") or by the ID of the autoscaling group.
I don't want to use other "big guns" like Chef, etc.
After reading too much documentation
Two strategies: retrieve the private DNS names either by tags or by autoscaling group.
By Tags
ec2 = Aws::EC2::Client.new

instances_tagged = ec2.describe_instances(
  dry_run: false,
  filters: [
    {
      name: 'tag:environment',
      values: ['production'],
    },
    {
      name: 'tag:stack',
      values: ['rails'],
    },
  ],
)

# Instances come back grouped by reservation, so flatten across all reservations
dns_tagged = instances_tagged.reservations.flat_map(&:instances).map(&:private_dns_name)
By Autoscaling group
as = Aws::AutoScaling::Client.new

instances_of_as = as.describe_auto_scaling_groups(
  auto_scaling_group_names: ['Autoscaling-Group-Name'],
  max_records: 1,
).auto_scaling_groups[0].instances

if instances_of_as.empty?
  autoscaling_dns = []
else
  instance_ids = instances_of_as.map(&:instance_id)
  # Resolve the private DNS names for those instance IDs in a single call
  autoscaling_dns = ec2.describe_instances(instance_ids: instance_ids)
                       .reservations.flat_map(&:instances)
                       .map(&:private_dns_name)
end
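Either list can then feed Capistrano's server definitions, for example in a stage file (the user and roles here are illustrative):

# config/deploy/production.rb
(dns_tagged | autoscaling_dns).each do |dns|
  server dns, user: 'deploy', roles: %w[app]
end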