aws_security_group - conditional ingress - amazon-ec2

I'm testing Terraform/Terragrunt to deploy RDS DB to AWS.
Is there a way to add conditional ingress to the aws_security_group definitions?
Terraform v0.12.3
Terragrunt version v0.19.8
So far the best I have been able to do is define one security group per condition, each with a count statement, and attach all of the individual security groups to the DB instance, like:
resource "aws_security_group" "db_sg_office" {
...
count = var.publicly_accessible ? 1 : 0
ingress {
...
cidr_blocks = ["1.2.3.4/32"]
}
}
...
resource "aws_db_instance" "default" {
...
vpc_security_group_ids = [ ... , "${aws_security_group.db_sg_office.id}" , ...]
...
}
This is actually NOT working and fails when the security group is referenced in the DB resource.

In Terraform, try the aws_security_group_rule resource with a count parameter; see the documentation for additional reference:
resource "aws_security_group" "db_sg_office" {
...
}
resource "aws_security_group_rule" "open_public" {
security_group_id = aws_security_group.db_sg_office.id
count = var.publicly_accessible ? 1 : 0
type = "ingress"
from_port = 0
to_port = 65535
cidr_blocks = ["1.2.3.4/32"]
protocol = "tcp"
}
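With the rule split out like this, the security group itself is always created and only its public ingress rule is conditional, so the DB instance can reference the group unconditionally. A minimal sketch reusing the resource names above (other arguments elided):
resource "aws_db_instance" "default" {
  # ... engine, instance_class, credentials, etc. ...

  # db_sg_office now always exists, so this reference is valid whether or not
  # var.publicly_accessible is true; only the ingress rule toggles.
  vpc_security_group_ids = [aws_security_group.db_sg_office.id]
}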

Related

Terraform Throwing 'InvalidParameterValue' Address x.x.x.x does not fall within the subnet's address range.. it does

Having an issue with a Terraform deployment. I have a module that creates a network, and then another module that creates a series of EC2 instances. These servers are required to have specific IP addresses, which are called out in the module (I would rather set these dynamically, but for now they are 'hardcoded'). However, I am getting an error that the IP address I am associating with the EC2 instance 'does not fall within the subnet's address range', but it does. Here is the basic breakdown:
servers
->main.tf
->variables.tf
->outputs.tf
network
->main.tf
->variables.tf
->outputs.tf
main.tf
The relevant bits are as follows:
network main.tf
# Create VPC
resource "aws_vpc" "foo" {
cidr_block = "192.168.1.0/24"
enable_dns_hostnames = "true"
enable_dns_support = "true"
tags = {
Name = "foo"
}
}
# Create a Subnet
resource "aws_subnet" "subnet-1" {
vpc_id = aws_vpc.foo.id
cidr_block = "192.168.1.0/24"
availability_zone = "ca-central-1a"
tags = {
Name = "subnet-1"
}
}
servers main.tf
resource "aws_instance" "bar" {
ami = var.some_ami
instance_type = "t3.medium"
associate_public_ip_address = true
private_ip = "192.168.1.15"
# root disk
root_block_device {
volume_size = "60"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
}
tags = {
Name = "bar"
}
}
main.tf
module "network" {
source = "./network"
}
module "servers" {
source = "./servers"
subnet_id = module.network.aws_subnet
}
Everything runs, and I verified in AWS that the VPC and the subnet are created, but for some reason when the server is being created I get the following error:
│ Error: creating EC2 Instance: InvalidParameterValue: Address 192.168.1.15 does not fall within the subnet's address range status code: 400
I left out some of the irrelevant bits of the .tf files, but everything else works as expected except this one thing. Anyone know what's going on?
Your aws_instance resource does not set the subnet_id attribute, so the instance is being launched into a default subnet, whose CIDR range does not contain 192.168.1.15.
Add the subnet_id attribute as below:
resource "aws_instance" "bar" {
ami = var.some_ami
instance_type = "t3.medium"
associate_public_ip_address = true
subnet_id = "your_subnet_id"
private_ip = "192.168.1.15"
# root disk
root_block_device {
volume_size = "60"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
}
tags = {
Name = "bar"
}
}
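In the module layout from the question, your root main.tf already passes subnet_id into the servers module; it just needs the network module to actually export the ID and the servers module to use it. A sketch under that assumption (the output and variable names are illustrative):
# network/outputs.tf
output "subnet_id" {
  value = aws_subnet.subnet-1.id
}

# servers/variables.tf
variable "subnet_id" {
  type = string
}

# servers/main.tf
resource "aws_instance" "bar" {
  # ... as before ...
  subnet_id = var.subnet_id
}

# root main.tf
module "servers" {
  source    = "./servers"
  subnet_id = module.network.subnet_id
}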
You could also use a data source to look up the subnet ID:
data "aws_subnet" "selected" {
filter {
name = "tag:Name"
values = ["myawesomesubnet"]
}
}
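The instance can then reference the looked-up ID, for example (a sketch reusing the data source above):
resource "aws_instance" "bar" {
  # ... as before ...
  subnet_id = data.aws_subnet.selected.id
}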

Provisioning Windows VM including File Provisioner for AWS using Terraform results in Timeout

I'm aware that there are already several posts similar to this one - I've gone through them and adapted my Terraform configuration file, but it makes no difference.
Therefore, I'd like to publish my configuration file and my use case: I'd like to provision a (Windows) virtual machine on AWS using Terraform. It works without the file provisioner part; with it included, provisioning results in a timeout.
This includes adaptations from previous posts:
SSH connection restriction
SSH isnt working in Windows with Terraform provisioner connection type
Usage of a Security group
Terraform File provisioner can't connect ec2 over ssh. timeout - last error: dial tcp 92.242.xxx.xx:22: i/o timeout
I also get a timeout when using "winrm" instead of "ssh".
I'd be happy if you could provide any hints for the following config file:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
# Configure the AWS Provider
provider "aws" {
access_key = "<my access key>"
secret_key = "<my secret key>"
region = "eu-central-1"
}
resource "aws_instance" "webserver" {
ami = "ami-07dfec7a6d529b77a"
instance_type = "t2.micro"
security_groups = [aws_security_group.sgwebserver.name]
key_name = aws_key_pair.pubkey.key_name
tags = {
"Name" = "WebServer-Win"
}
}
resource "null_resource" "deployBundle" {
connection {
type = "ssh"
user = "Administrator"
private_key = "${file("C:/Users/<my user name>/aws_keypair/aws_instance.pem")}"
host = aws_instance.webserver.public_ip
}
provisioner "file" {
source = "files/test.txt"
destination = "C:/test.txt"
}
depends_on = [ aws_instance.webserver ]
}
resource "aws_security_group" "sgwebserver" {
name = "sgwebserver"
description = "Allow ssh inbound traffic"
ingress {
from_port = 0
to_port = 6556
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "sgwebserver"
}
}
resource "aws_key_pair" "pubkey" {
key_name = "aws-cloud"
public_key = file("key/aws_instance.pub")
}
resource "aws_eip" "elasticip" {
instance = aws_instance.webserver.id
}
output "eip" {
value = aws_eip.elasticip.public_ip
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
tags = {
Terraform = "true"
Environment = "dev"
}
}
Thanks a lot in advance!
Windows EC2 instances don't support SSH out of the box; they support RDP. You would have to install SSH server software on the instance before you could SSH into it.
I suggest doing something like placing the file in S3, and using a user data script to trigger the Windows EC2 instance to download the file on startup.
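A rough sketch of that approach, assuming the file is uploaded to an S3 bucket of your own and the instance has an IAM instance profile that allows reading it (the bucket, key, and profile names below are placeholders, not from the question). Amazon's stock Windows AMIs ship with the AWS Tools for PowerShell, so Read-S3Object is available in user data:
resource "aws_instance" "webserver" {
  ami                  = "ami-07dfec7a6d529b77a"
  instance_type        = "t2.micro"
  key_name             = aws_key_pair.pubkey.key_name
  iam_instance_profile = "webserver-s3-read-profile" # placeholder; must grant s3:GetObject on the bucket
  # security group / subnet settings as in your existing config

  # Runs once at first boot and downloads the file from S3 instead of using a file provisioner.
  user_data = <<-EOF
    <powershell>
    Read-S3Object -BucketName "my-deploy-bucket" -Key "test.txt" -File "C:\test.txt"
    </powershell>
  EOF

  tags = {
    "Name" = "WebServer-Win"
  }
}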

launching aws elb instance using terraform

I am new to Terraform and AWS; all I want to do is launch an AWS EC2 instance behind an Elastic Load Balancer using Terraform. I found some configuration examples on various sites, but I don't know the right way to put those configurations together, what the folder structure should be, and so on. I have done it using the AWS console, but I'm not getting far with Terraform.
The server should run apache2.
Any help is appreciated.
Thanks in advance
To create an Elastic Load Balancer with Terraform, you will need to create the following resources.
If you already have the EC2 instance created and just want to attach it to the ELB:
Create Target Group
Create ELB
Assign the Target Group to your ELB
Register your existing instance to your Target Group
If you don't have any instances created yet:
Create Target Group
Create ELB
Assign the Target Group to your ELB
Create Launch Template/Configuration
Create ASG, assign the ELB to ASG
New instances created through the ASG will auto-register with the ELB target group.
Terraform resource examples:
Launch Configuration
resource "aws_launch_configuration" "Your_Launch_Configuration" {
name = "launch_conf_name"
instance_type = "Instance_Type"
image_id = "AMI_image_id"
key_name = "Key_Name"
security_groups = "security_groups_id"
user_data = "User Data"
iam_instance_profile = "Instance IAM Role"
}
Auto Scaling Group
resource "aws_autoscaling_group" "Your_ASG" {
name = "ASG Name"
launch_configuration = aws_launch_configuration.Your_Launch_Configuration.id
max_size = "Max size"
min_size = "Min Size"
desired_capacity = "Desired Capacity"
vpc_zone_identifier = "Your Subnet List"
tags = [{
"key" = "Name"
"value" = "ASG Name"
"propagate_at_launch" = true
}]
health_check_grace_period = "300"
target_group_arns = "set of your ELB target Group"
}
Load Balancer Target Group
resource "aws_load_balancer_target_group" "Your_target_group" {
name = "Target_group_name"
port = "80"
protocol = "HTTP"
vpc_id = "Your_vpcid"
tags = {
name = "Target_group_name"
}
health_check {
enabled = true
interval = 300 # health check interval
protocol = "HTTP"
timeout = 300 # timeout seconds
path = "/" # your health check path
}
}
Load Balancer
resource "aws_load_balancer" "your_load_balancer" {
name = load_balancer_name
load_balancer_type = "application"
internal = true # if not internet facing
subnets = ["List of your subnet id"]
security_groups = ["List of your security group id"]
tags = {
"name" = load_balancer_Target_group_name
}
}
Load Balancer Listener
resource "aws_lb_listener" "your_load_balancer_listener" {
  load_balancer_arn = aws_lb.your_load_balancer.arn # ARN of your load balancer
  port              = "80"
  protocol          = "HTTP"
  default_action {
    target_group_arn = aws_lb_target_group.Your_target_group.arn # ARN of your target group
    type             = "forward"
  }
}
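Since you mention the server should run apache2, the usual place to handle that is the launch configuration's user_data, so every instance the ASG creates installs Apache on boot. A minimal sketch, assuming an Ubuntu AMI (the resource name and placeholders are illustrative):
resource "aws_launch_configuration" "apache_launch_configuration" {
  name            = "apache_launch_conf"
  image_id        = "AMI_image_id" # an Ubuntu AMI, so the package name below is apache2
  instance_type   = "t3.micro"
  security_groups = ["security_group_ids"]

  # Installs and starts Apache on first boot.
  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y apache2
    systemctl enable --now apache2
  EOF
}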

How to whitelist Atlassian/Bitbucket IPs in AWS EC2 security group?

We want Bitbucket webhooks to trigger our CI tool which runs on an AWS EC2 instance, protected with ingress rules from general access.
Bitbucket provides a page listing their IP addresses at https://support.atlassian.com/bitbucket-cloud/docs/what-are-the-bitbucket-cloud-ip-addresses-i-should-use-to-configure-my-corporate-firewall/
They also have a machine-consumable version at https://ip-ranges.atlassian.com/ for Atlassian IPs in general.
I wonder what an efficient approach would be to add and maintain this list in AWS EC2 security groups, e.g. via Terraform.
I ended up scraping the machine-consumable JSON from their page and letting Terraform manage the rest. Getting the JSON is left as a manual step.
resource "aws_security_group_rule" "bitbucket-ips-sgr" {
security_group_id = "your-security-group-id"
type = "ingress"
from_port = 443
to_port = 443
protocol = "TCP"
cidr_blocks = local.bitbucket_cidrs_ipv4
ipv6_cidr_blocks = local.bitbucket_cidrs_ipv6
}
locals {
bitbucket_cidrs_ipv4 = [for item in local.bitbucket_ip_ranges_source.items:
# see https://stackoverflow.com/q/47243474/1242922
item.cidr if length(regexall(":", item.cidr)) == 0
]
bitbucket_cidrs_ipv6 = [for item in local.bitbucket_ip_ranges_source.items:
# see https://stackoverflow.com/q/47243474/1242922
item.cidr if length(regexall(":", item.cidr)) > 0
]
# the list originates from https://ip-ranges.atlassian.com/
bitbucket_ip_ranges_source = jsondecode(
<<JSON
the json output from the above URL
JSON
)
}
I improved on Richard's answer and wanted to add that Terraform's http provider can fetch the JSON for you, and, with a slight tweak to the jsondecode() call, the same for loop still works.
provider "http" {}
data "http" "bitbucket_ips" {
url = "https://ip-ranges.atlassian.com/"
request_headers = {
Accept = "application/json"
}
}
locals {
bitbucket_ipv4_cidrs = [for c in jsondecode(data.http.bitbucket_ips.body).items : c.cidr if length(regexall(":", c.cidr)) == 0]
bitbucket_ipv6_cidrs = [for c in jsondecode(data.http.bitbucket_ips.body).items : c.cidr if length(regexall(":", c.cidr)) > 0]
}
output "ipv4_cidrs" {
value = local.bitbucket_ipv4_cidrs
}
output "ipv6_cidrs" {
value = local.bitbucket_ipv6_cidrs
}
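To actually open the ports, the two locals plug straight into a rule like the one in Richard's answer; a sketch with the security group ID left as a placeholder:
resource "aws_security_group_rule" "bitbucket-ips-sgr" {
  security_group_id = "your-security-group-id"
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = local.bitbucket_ipv4_cidrs
  ipv6_cidr_blocks  = local.bitbucket_ipv6_cidrs
}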

attaching different Security Groups to different EC2s

Requirement:
I have multiple groups (say 2 groups) of EC2 instances, where each group contains 6 instances, and I have to attach a different SG to each group.
Example:
Group 1 contains: Head1, children EC2-1, EC2-2 ... 6, and needs SG1 attached
Group 2 contains: Head2, children EC2-3, EC2-4 ... 6, and needs SG2 attached
I don't want to write separate resource "aws_instance" blocks for each group.
Head-Module
resource "aws_security_group" "sg" {
count = var.ec2_instance_count
name = "${local.account}${count.index}"
vpc_id = local.vpc_id
}
resource "aws_instance" "ec2_instance" {
count = var.ec2_instance_count
security_groups = [element(aws_security_group.sg.*.id, count.index)]
}
Child-Module:
data "aws_security_groups" "data_security_group" {
filter {
name = "group-name"
values = ["${local.account}${count.index}"]
}
}
resource "aws_instance" "ec2_child" {
count = var.ec2_instance_count*var.numberofchild
security_groups = [element(aws_security_group.data_security_group.*.id, count.index)]
}
Error: Error launching source instance: InvalidGroup.NotFound: The security group 'terraform-20200824151444795600000001' does not exist in VPC 'vpc-ghhje85abcy'
status code: 400, request id: 9260fd88-a03a-4c46-b67c-3287594cdab5
on main.tf line 68, in resource "aws_instance" "ec2_instance":
68: resource "aws_instance" "ec2_instance" {
Note: I am using data "aws_security_groups" instead of data "aws_security_group". If I use the latter, I can only match one SG in the data source and it throws the error "multiple Security Groups matched"; I got past that by switching to data "aws_security_groups", which made that error vanish, but the latest error I am facing is the InvalidGroup.NotFound shown above.
Update: I am able to use the data source and attach different SGs to different EC2 instances. The only issue is the random sequencing: I want all 6 EC2 instances of group 1 to be assigned the first SG, and so on.
Don't use the data source; instead, create your resource "aws_security_group" with a count like you do on your resource "aws_instance". That way you can reference them directly...
resource "aws_security_group" "sg" {
count = var.ec2_instance_count
name = "${local.account}${count.index}"
vpc_id = local.vpc_id
}
resource "aws_instance" "ec2_instance" {
count = var.ec2_instance_count
security_groups = [element(aws_security_group.sg.*.id, count.index)]
}
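If the children live in a separate child module, the same idea carries over without any data lookups: pass the security group IDs into the child module as a variable. A sketch under that assumption (the module name, variable names, and children_per_group default are illustrative, not from the question):
# head/root module
module "children" {
  source             = "./child"
  security_group_ids = aws_security_group.sg[*].id
}

# child module
variable "security_group_ids" {
  type = list(string)
}

variable "children_per_group" {
  type    = number
  default = 6
}

resource "aws_instance" "ec2_child" {
  count = length(var.security_group_ids) * var.children_per_group
  # each block of children_per_group instances gets the SG of its group
  vpc_security_group_ids = [element(var.security_group_ids, floor(count.index / var.children_per_group))]
  # ... ami, instance_type, etc. ...
}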
Thanks Helder, I created the resources with count. It's not a huge infrastructure, but a fairly complex one: 8 groups (each group has 1 parent and 6 child EC2 instances).
There is 1 external SG for all parents, and 8 internal SGs (1 for each group).
I had to follow a sequence of provisioning because the requirement was to pass the parent host name to the respective group of children in the children's user data, so I kept them in separate modules and used "data" resources for reuse.
ParentModule:
resource "aws_instance" "ec2_instance" {
count = tonumber(var.mycount)
vpc_security_group_ids = [data.aws_security_group.external_security_group.id, element(data.aws_security_group.internal_security_group.*.id, count.index)]
...
}
resource "aws_security_group" "internal_security_group" {
count = tonumber(var.mycount)
name = "${var.internalSGname}${count.index}"
}
resource "aws_security_group" "external_security_group" {
name = "${var.external_sg_name}"
}
ChildModule: uses a data resource and a dynamic map to assign the SGs to the proper group of EC2 instances.
data "aws_security_group" "internal_security_group" {
count = tonumber(var.mycount)
filter {
name = "group-name"
values = "${var.internalSGname}${count.index}"]
}
}
resource "aws_instance" "ec2_child" {
count = local.child_count * tonumber(var.mycount)
vpc_security_group_ids = ["${element(data.aws_security_group.internal_security_group.*.id, "${lookup(local.SG_lookup, count.index, 99)}")}"]
variable.tf
locals{
SG_lookup = {
for n in range(0, (local.child_count * tonumber(var.mycount))) :
n => "${floor(((n) / local.child_count))}"
}
}
}
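For example, with local.child_count = 6 and var.mycount = 2, SG_lookup evaluates to { 0 = 0, 1 = 0, ..., 5 = 0, 6 = 1, ..., 11 = 1 }, so the first six children resolve to the first internal security group and the next six to the second, which gives the deterministic sequencing described above.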
