Terraform - Shared volume on AWS

I'm trying to attach a shared volume to my auto-scaling group, but I'm not sure how to do that in Terraform, or whether it is even possible.
The aws_volume_attachment resource takes a single instance ID, but I need something I can put in the launch configuration somehow. Can someone please help?
resource "aws_ebs_volume" "shared_volume" {
availability_zone = "us-east-1"
size = 2
}
resource "aws_volume_attachment" "volume_attachment" {
device_name = "/dev/xvdb"
instance_id = "????"
volume_id = "${aws_ebs_volume.shared_volume.id}"
skip_destroy = true
}
resource "aws_launch_configuration" "flume-conf" {
image_id = "${var.app_ami_id}"
instance_type = "${var.app_instance_type}"
key_name = "${var.ssh_key_name}"
security_groups = ["${var.app_security_group_id}"]
user_data = "${data.template_file.config.rendered}"
iam_instance_profile = "${var.app_iam_role}"
root_block_device {
volume_size = 50
volume_type = "gp2"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "ec2_asg" {
name = "${format("%s", var.app_name)}"
launch_configuration = "${aws_launch_configuration.flume-conf.name}"
min_size = "${var.asg_min_size}"
max_size = "${var.asg_max_size}"
vpc_zone_identifier = ["${element(data.aws_subnet_ids.private.ids, 0)}", "${element(data.aws_subnet_ids.private.ids, 1)}"]
availability_zones = "${var.availability_zones}"
depends_on = []
lifecycle {
create_before_destroy = false
}
}

EBS volumes cannot be attached to multiple hosts. If you need storage shared across the instances of an auto-scaling group, consider EFS instead: it is a network file system that any number of instances can mount concurrently.
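For illustration, a minimal sketch of the EFS approach (the resource names and the private_subnet_ids / efs_security_group_id variables are assumptions, not from the original config); instances then mount the file system from user data instead of via aws_volume_attachment:
resource "aws_efs_file_system" "shared" {
  creation_token = "shared-volume" # hypothetical token
}
# One mount target per subnet that the ASG launches into
resource "aws_efs_mount_target" "shared" {
  count           = length(var.private_subnet_ids) # assumed variable
  file_system_id  = aws_efs_file_system.shared.id
  subnet_id       = var.private_subnet_ids[count.index]
  security_groups = [var.efs_security_group_id] # must allow NFS (TCP 2049)
}
In the launch configuration's user_data each instance can then run something like mount -t nfs4 <file-system-dns>:/ /mnt/shared, or use the amazon-efs-utils mount helper.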

Related

Terraform - passing vars between different resources

I have one resource that creates an EC2 instance, and another that creates an EBS volume (with the attachment inside).
In the EBS resource I need to set instance_id = aws_instance.ec2.id, which is created in the EC2 resource. How can I pass that value from one resource to the other?
I'm using modules with a tfvars file to consume both resources and create an EC2 instance with an external disk.
ec2 main file:
# NIC
resource "aws_network_interface" "nic" {
  subnet_id   = "${var.subnet_id}"
  private_ips = "${var.ip_address}"
  tags        = { Name = var.tags }
}
# EC2 Instance
resource "aws_instance" "ec2" {
  ami                  = "${var.ami}"
  instance_type        = "${var.instance_type}"
  iam_instance_profile = "${var.iam_instance_profile}"
  tags                 = { Name = var.tags }
  key_name             = "${var.key_name}"
  network_interface {
    network_interface_id = "${aws_network_interface.nic.id}"
    device_index         = 0
  }
}
External disk main file:
resource "aws_ebs_volume" "external_disk" {
availability_zone = "${var.availability_zone}"
size = "${var.disk_size}"
type = "${var.disk_type}"
tags = { Name = var.tags }
}
resource "aws_volume_attachment" "disk_attach" {
device_name = "${var.device_name}"
volume_id = aws_ebs_volume.external_disk.id
instance_id = aws_instance.ec2.id
}
Main env module:
module "external_disk_red" {
source = "../source-modules/external-disk"
availability_zone = "${var.availability_zone}"
size = "${var.disk_size_red}"
type = "${var.disk_type_red}"
}
module "red" {
source = "../source-modules/ec2"
region = "${var.region}"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
ami = "${var.ami}"
instance_type = "${var.instance_type}"
iam_instance_profile = "${var.iam_instance_profile}"
tags = "${var.tags_red}"
key_name = "${var.key_name}"
ip_address = "${var.ip_address_red}"
subnet_id = "${var.subnet_id}"
device_name = "${var.volume_path_red}"
}
What you have to do in this situation is add an output to your ec2 module and use it as an input (a variable) in your external-disk module.
Also, I don't know which version of Terraform you are using, but wrapping a bare variable reference in double quotes is considered legacy (unless you actually want string interpolation).
So...
source-modules/ec2/output.tf
output "instance_id" {
value = aws_instance.ec2.id
description = "ID of the EC2 instance"
}
source-modules/external-disk/variables.tf
variable "instance_id" {
type = string
description = "ID of the EC2 instance"
}
source-modules/external-disk/main.tf
resource "aws_ebs_volume" "external_disk" {
availability_zone = var.availability_zone
size = var.disk_size
type = var.disk_type
tags = { Name = var.tags }
}
resource "aws_volume_attachment" "disk_attach" {
device_name = var.device_name
volume_id = aws_ebs_volume.external_disk.id
instance_id = var.instance_id
}
Main env module
module "external_disk_red" {
source = "../source-modules/external-disk"
availability_zone = var.availability_zone
size = var.disk_size_red
type = var.disk_type_red
instance_id = module.red.instance_id
}
module "red" {
source = "../source-modules/ec2"
region = var.region
access_key = var.access_key
secret_key = var.secret_key
ami = var.ami
instance_type = var.instance_type
iam_instance_profile = var.iam_instance_profile
tags = var.tags_red
key_name = var.key_name
ip_address = var.ip_address_red
subnet_id = var.subnet_id
device_name = var.volume_path_red
}
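Note that referencing module.red.instance_id also gives Terraform an implicit dependency between the two modules, so the instance is created before the volume attachment; no explicit depends_on is required.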

How to create multiple EC2 instances across different subnets and AZs with private IP addresses

I have a slight issue. I have 3 subnet CIDR blocks across 3 different AZ groups, and I am passing a list of static IP addresses for these instances to be assigned in order. My code, however, is throwing "ip address is out of range for the subnet", which makes sense because it just walks from IP 0 to the Nth instance. How can I make sure each instance gets placed in the proper subnet? I hope my question makes sense and is clear; please see the code below. Thank you all for the help! The code below creates the instances:
### Start of Radient FID Server ###
resource "aws_instance" "FID" {
depends_on = [aws_kms_key.aws-wm-wmad-prod]
disable_api_termination = false
count = var.How_many_FID
ami = var.windows_dc_ami_2016
availability_zone = element(var.availability_zones, count.index)
ebs_optimized = var.windows_dc_ebs_optimized
instance_type = var.windows_dc_instance_type_FID
key_name = var.Key_Pair_Ec2
monitoring = true
vpc_security_group_ids = [aws_security_group.Private01.id]
subnet_id = element(aws_subnet.private_subnet_cidr_blocks_Apps, count.index).id
private_ip = "${lookup(var.ips,count.index)}"
associate_public_ip_address = false
tags = merge(
{
Name = element(var.Radiant_FID_Server_Tags, count.index)
Project = var.project,
Environment = var.environment
},
var.tags
)
I have a variables file in which I'm passing the private IP addresses:
variable "ips" {
default = {
"0" = "10.7.90.79"
"1" = "10.7.90.80"
"2" = "10.7.90.81"
"3" = "10.7.90.82"
"4" = "10.7.90.90"
"5" = "10.7.90.84"
"6" = "10.7.90.85"
"7" = "10.7.90.86"
"8" = "10.7.90.87"
"9" = "10.7.90.88"
}
}
## How I create my subnets
resource "aws_subnet" "private_subnet_cidr_blocks_AD" {
  count             = length(var.private_subnet_cidr_blocks_AD) # count = 3
  vpc_id            = aws_vpc.default.id
  cidr_block        = var.private_subnet_cidr_blocks_AD[count.index]
  availability_zone = var.availability_zones[count.index]
  tags = merge(
    {
      Name        = element(var.private_subnet_cidr_blocks_AD_NameTag, count.index),
      Project     = var.project,
      Environment = var.environment
    },
    var.tags
  )
}
So let's say you have the following CIDR block list defined:
private_subnet_cidr_blocks_AD = ["10.7.90.64/27", "10.7.90.96/27", "10.7.90.160/27"]
Then you could define your IP list like this:
variable "ips" {
default = [
{ subnet=0, ip="10.7.90.79" },
{ subnet=0, ip="10.7.90.80" },
{ subnet=0, ip="10.7.90.81" },
# etc...
{ subnet=1, ip="10.7.90.100" },
{ subnet=1, ip="10.7.90.101" },
# etc...
{ subnet=2, ip="10.7.90.170" },
{ subnet=2, ip="10.7.90.171" },
]
}
Each subnet number is the index into the private_subnet_cidr_blocks_AD list for the CIDR block that the IP belongs to.
Then your instance definition could look like this:
resource "aws_instance" "FID" {
for_each = toset( var.ips )
subnet_id = aws_subnet.private_subnet_cidr_blocks_Apps[each.value.subnet].id
private_ip = each.value.ip
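If you need to refer to the instances created this way elsewhere, note that with for_each they are indexed by key rather than by number; for example, a hypothetical output collecting the private IPs:
output "fid_private_ips" {
  value = { for key, inst in aws_instance.FID : key => inst.private_ip }
}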

Unable to create EC2 using Terraform. Route Table Association stuck in creating mode

I am trying to create a simple infrastructure consisting of an EC2 instance, a VPC, and internet connectivity through an Internet Gateway. While the infrastructure is being created with terraform apply, the terminal output gets stuck in creating mode for approximately 5-6 minutes on the route table association (using the subnet ID), and then finally throws an error that the VPC ID, route table ID, and subnet ID do not exist or were not found.
Sharing the relevant code below:
resource "aws_route_table" "dev-public-crt" {
vpc_id = "aws_vpc.main-vpc.id"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "aws_internet_gateway.dev-igw.id"
}
tags = {
Name = "dev-public-crt"
}
}
resource "aws_route_table_association" "dev-crta-public-subnet-1"{
subnet_id = "aws_subnet.dev-subnet-public-1.id"
route_table_id = "aws_route_table.dev-public-crt.id"
}
resource "aws_vpc" "dev-vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "dev-vpc"
}
}
resource "aws_subnet" "dev-subnet-public-1" {
vpc_id = "aws_vpc.dev-vpc.id"
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
tags = {
Name = "dev-subnet-public-1"
}
}
You need to remove the quotes around all the reference values you have there: vpc_id = "aws_vpc.main-vpc.id" should be vpc_id = aws_vpc.main-vpc.id, and so on. Otherwise you are trying to create the aws_route_table in a VPC whose ID is the literal string "aws_vpc.main-vpc.id".
Whenever you want to reference variables, resources, or data sources, either don't wrap the reference in quotes at all, or interpolate it inside a string, e.g. "something ${aws_vpc.main-vpc.id} ...".
The result should probably look like:
resource "aws_route_table" "dev-public-crt" {
vpc_id = aws_vpc.main-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.dev-igw.id
}
tags = {
Name = "dev-public-crt"
}
}
resource "aws_route_table_association" "dev-crta-public-subnet-1"{
subnet_id = aws_subnet.dev-subnet-public-1.id
route_table_id = aws_route_table.dev-public-crt.id
}
resource "aws_vpc" "dev-vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "dev-vpc"
}
}
resource "aws_subnet" "dev-subnet-public-1" {
vpc_id = aws_vpc.dev-vpc.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
tags = {
Name = "dev-subnet-public-1"
}
}
No guarantee that this works, because there may still be invalid references (for example, the route table refers to aws_vpc.main-vpc while the VPC resource shown is named dev-vpc); those need to be cleaned up by you, and terraform validate will point them out.

Terraform AWS-EC2 security groups

I've been busy trying to learn more about Terraform, but I'm having one problem that I have no clue how to work around or fix.
The problem is as follows: in my script I am generating an EC2 instance (AWS) with a couple of side things like an EIP and a security group from a module, which all works fine. But I cannot figure out how to attach the security group to the machine; right now it's just being created, and that's it.
The code is as follows:
data "aws_ami" "latest" {
most_recent = true
owners = [ "self"]
filter {
name = "name"
values = [ lookup(var.default_ami, var.ami) ]
}
}
module "aws_security_group" {
source = "./modules/services/Security groups/"
server_port = 443
}
resource "aws_instance" "test-ec2deployment" {
ami = data.aws_ami.latest.id
instance_type = var.instance_type
subnet_id = var.subnet_id
availability_zone = var.availability_zone
associate_public_ip_address = var.public_ip
root_block_device {
volume_type = "gp2"
volume_size = 60
delete_on_termination = true
}
tags = {
Name = "Testserver2viaTerraform"
}
}
resource "aws_eip" "ip" {
instance = aws_instance.test-ec2deployment.id
}
resource "aws_eip" "example" {
vpc = true
}
Above is the main file; I'm loading the following module:
resource "aws_security_group" "my-webserver" {
name = "webserver"
description = "Allow HTTP from Anywhere"
vpc_id = "vpc-"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "my-webserver"
Site = "my-web-site"
}
}
The last step is attaching the security group to the machine, but again, I have no clue how to do that. I've been reading several docs and tried to Google it, but I cannot seem to find the answer, or the answers I find don't work for me. Hopefully you guys can help me further.
Thanks for your time, much appreciated!
In the aws_security_group module you need to output the security group ID by adding the following to ./modules/services/Security groups/main.tf:
output "securitygroup_id" {
value = aws_security_group.my-webserver.id
}
Then, in your main tf file, attach the security group to your instance like this:
resource "aws_network_interface_sg_attachment" "sg_attachment" {
security_group_id = module.aws_security_group.securitygroup_id
network_interface_id = aws_instance.test-ec2deployment.primary_network_interface_id
}
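Alternatively, assuming you don't need the attachment managed as a separate resource, you can reference the security group directly on the instance through vpc_security_group_ids (a sketch reusing the resources above):
resource "aws_instance" "test-ec2deployment" {
  ami                    = data.aws_ami.latest.id
  instance_type          = var.instance_type
  vpc_security_group_ids = [module.aws_security_group.securitygroup_id]
  # ... remaining arguments unchanged
}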

How can I add a tag to AWS EBS when creating through EC2 with Terraform?

I'm trying to create an EC2 instance for a TEST environment, which uses an AMI of PROD. Everything is created correctly, but I can't figure out how to add tags to the EBS volumes that are created along with it.
The tags work on the EC2 instance but don't get applied to the EBS or root volume. I tried adding a tag map on those as well, but that was invalid. Any ideas?
provider "aws" {
region = "us-east-1"
}
data "aws_ami" "existing_sft_ami" {
most_recent = true
filter {
name = "name"
values = [var.prod_name]
}
owners = [
var.aws_account_id]
}
data "aws_subnet" "subnet" {
id = var.aws_subnet_id
}
resource "aws_instance" "sftp" {
ami = data.aws_ami.existing_sft_ami.id
instance_type = "t2.micro"
availability_zone = var.availability_zone
subnet_id = data.aws_subnet.subnet.id
key_name = var.ssh_key_name
vpc_security_group_ids = [var.aws_security_group_id]
root_block_device {
delete_on_termination = true
}
ebs_block_device {
device_name = "/dev/sdb"
delete_on_termination = true
}
tags = {
Name = var.name
Owner = var.owner
Created = formatdate("DD MMM YYYY hh:mm ZZZ", timestamp())
Environment = "TEST"
}
}
You need to use the additional volume_tags argument to tag the volumes. Also, to make your code a little more DRY, you can do this with a locals block.
locals {
  tags = {
    Name        = var.name
    Owner       = var.owner
    Created     = formatdate("DD MMM YYYY hh:mm ZZZ", timestamp())
    Environment = var.environment
  }
}
resource "aws_instance" "sftp" {
  ami                    = data.aws_ami.existing_sft_ami.id
  instance_type          = "t2.micro"
  availability_zone      = var.availability_zone
  subnet_id              = data.aws_subnet.subnet.id
  key_name               = var.ssh_key_name
  vpc_security_group_ids = [var.aws_security_group_id]
  root_block_device {
    delete_on_termination = true
  }
  ebs_block_device {
    device_name           = "/dev/sdb"
    delete_on_termination = true
  }
  tags        = local.tags
  volume_tags = local.tags
}
You can add a tags attribute to root_block_device and ebs_block_device as well; that gives you more control in case you don't want to apply the same set of tags to all your block devices (which is what volume_tags does).
E.g.:
provider "aws" {
region = "us-east-1"
}
data "aws_ami" "existing_sft_ami" {
most_recent = true
filter {
name = "name"
values = [var.prod_name]
}
owners = [
var.aws_account_id]
}
data "aws_subnet" "subnet" {
id = var.aws_subnet_id
}
resource "aws_instance" "sftp" {
ami = data.aws_ami.existing_sft_ami.id
instance_type = "t2.micro"
availability_zone = var.availability_zone
subnet_id = data.aws_subnet.subnet.id
key_name = var.ssh_key_name
vpc_security_group_ids = [var.aws_security_group_id]
root_block_device {
delete_on_termination = true
tags = {
Name = "${var.name}-root-volume"
Owner = var.owner
Created = formatdate("DD MMM YYYY hh:mm ZZZ", timestamp())
Environment = "TEST"
}
}
ebs_block_device {
device_name = "/dev/sdb"
delete_on_termination = true
tags = {
Name = "${var.name}-secondary-volume"
Owner = var.owner
Created = formatdate("DD MMM YYYY hh:mm ZZZ", timestamp())
Environment = "TEST"
}
}
tags = {
Name = var.name
Owner = var.owner
Created = formatdate("DD MMM YYYY hh:mm ZZZ", timestamp())
Environment = "TEST"
}
}
As @ben-whaley said, you can avoid some repetition by defining the common set of tags as a local variable; you can combine these with tags specific to your block device using merge, as follows:
tags = merge(local.tags, {
  Name = "${var.name}-secondary-volume"
})
Note that there's a bug in the AWS provider for Terraform that makes it impossible to update tags on any ebs_block_device once the instance has been created; updating tags on the root_block_device works fine.
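If that bug bites you, one possible workaround (an assumption on my part, not something from the thread) is to tell Terraform to ignore changes to the block device so the failing tag update is never attempted:
resource "aws_instance" "sftp" {
  # ... arguments as above
  lifecycle {
    # assumption: skipping diffs on ebs_block_device avoids the broken tag update
    ignore_changes = [ebs_block_device]
  }
}
The trade-off is that any other change to that block device will also be ignored.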
