I may have come across some conflicting docs today.
When creating an Elasticsearch domain with vpc_options, the official HashiCorp Terraform docs (https://www.terraform.io/docs/providers/aws/r/elasticsearch_domain.html) say that subnet_ids is a list, and they even specify 2 subnets in their example. However, when I specify 2 subnets I get an error (I tried 2 different ways of specifying the subnet list):
vpc_options {
  subnet_ids = "${var.private_subnet_ids}"
}
OR
vpc_options {
  subnet_ids = [
    "${var.private_subnet_ids[0]}",
    "${var.private_subnet_ids[1]}",
  ]
}
Both of them give me the same error -
Error: Error creating ElasticSearch domain: ValidationException: You must specify exactly one subnet.
status code: 400, request id: 98b49b34-2da8-11ea-8114-e9488cc7cb63
on modules/es/main.tf line 51, in resource "aws_elasticsearch_domain" "es":
51: resource "aws_elasticsearch_domain" "es" {
If I specify a single subnet, it works fine.
subnet_ids = ["${var.private_subnet_ids[0]}"]
I do however want to be able to specify both of my private subnets for the ES cluster.
Is there a way to do that? I noticed a couple of issues on GitHub for this, but the resolution was the same as what is in the Terraform docs, and that does not work for me. I am using v0.12.17, in case it matters.
The variable private_subnet_ids is a list:
variable "private_subnet_ids" {
type = "list"
description = "The list of private subnets to place the instances in"
}
The original AWS documentation (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html) explains the structure behind the Terraform behavior. An ES domain can only connect to one subnet if it resides in a single Availability Zone. If you enable Multi-AZ (zone awareness) on the domain, you can supply a second subnet, which must then be in the other AZ that the ES cluster spans.
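A minimal sketch of what that looks like in Terraform, assuming an AWS provider version that supports the zone_awareness_config block (domain name, version and sizes are illustrative):
resource "aws_elasticsearch_domain" "es" {
  domain_name           = "example"
  elasticsearch_version = "7.1"

  cluster_config {
    instance_count         = 2
    zone_awareness_enabled = true

    zone_awareness_config {
      availability_zone_count = 2
    }
  }

  ebs_options {
    ebs_enabled = true
    volume_size = 10
  }

  vpc_options {
    # one subnet per AZ the domain spans
    subnet_ids = [
      "${var.private_subnet_ids[0]}",
      "${var.private_subnet_ids[1]}",
    ]
  }
}
Without zone awareness the domain is confined to a single AZ, which is exactly why the API insists on exactly one subnet.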
My current Terraform setup consists of 3 template files. Each template file is linked to a launch configuration resource, which is then used to launch instances via auto scaling events. In each template file there is an AWS CLI command used to attach an existing EBS volume to the new instance being launched by autoscaling. I am having trouble writing a conditional expression that passes a variable into this AWS CLI command so it attaches a specific volume. Since I have 3 template files and 3 EBS volumes to attach, one per instance in its own auto scaling group, I don't believe I can express more than 2 outcomes within my conditional expression.
Template_file
data "template_file" "ML_10_user_data" {
count = "${(var.enable ? 1 : 0) * var.number_of_zones}" // 3 templates
template = "${file("userdata.sh")}
vars {
ebs_volume = "${count.index == 0 ? vol-xxxxxxxxxxxxxxxxx : vol-xxxxxxxxxxxxxxxxx}" // how to include 3rd EBS volume?
}
}
Userdata.sh
#!/bin/bash
EBS_VOLUME=${ebs_volume}
aws ec2 attach-volume --volume-id $${EBS_VOLUME} --instance-id `curl http://169.254.169.254/latest/meta-data/instance-id` --device /dev/sdf
Any advice on how I can fulfill what I am trying to accomplish would be appreciated.
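For reference, the conditional itself can be nested to cover a third case; a sketch using placeholder volume IDs (the list-based approach in the answer below is cleaner, though):
ebs_volume = "${count.index == 0 ? "vol-aaaaaaaaaaaaaaaaa" : (count.index == 1 ? "vol-bbbbbbbbbbbbbbbbb" : "vol-ccccccccccccccccc")}"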
The best way would be to put the volume IDs in a list:
variable "volumes" {
default = ["vol-1111","vol-2222","vol-333"]
}
data "template_file" "user_data" {
count = "${(var.enable ? 1 : 0) * var.number_of_zones}"
template = "${file("userdata.sh")}"
vars {
ebs_volume = "${var.volumes[count.index]}"
}
}
However, if these instances are meant to be in an ASG, this is not a very good design. Instances in an ASG should be identical, interchangeable and disposable. They can be terminated at any time by AWS or by scaling activities, and you should treat them as a group, not as individual entities.
I tried launching an EC2 instance using input parameters for the variables in the terraform apply command. This creates the instance successfully. However, when I try to delete the instance using terraform destroy, it executes but nothing gets deleted.
I have a region variable with a default value. When I pass a different region in this variable using input parameters, the instance launches just fine in the provided region, but I am not able to terminate it using terraform destroy.
main.tf
variable "region" {
default = "us-west-1"
}
variable "ami" {
type = "map"
default = {
us-east-2 = "ami-02e680c4540db351e"
us-west-1 = "ami-011b6930a81cd6aaf"
}
}
provider "aws" {
region = "${var.region}"
}
resource "aws_instance" "web" {
ami = "${lookup(var.ami,var.region)}"
instance_type = "t2.micro"
tags {
Name = "naxi"
}
}
Terraform apply:
terraform apply -var region=us-east-2
Output of terraform destroy:
aws_instance.web: Refreshing state... (ID: i-05ca0514f61dcaf16)
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
Destroy complete! Resources: 0 destroyed.
Though it's able to look up the instance ID in the correct region, my guess is that it is trying to terminate the instance in the default region rather than in the one I supplied as a parameter.
Is there a way I can supply a parameter -var region=something with terraform destroy?
Destroy works as expected if I use the default values and no input parameters.
EDIT---
As soon as I run this command: terraform destroy -var-file=variables.tfvars, all the instance-related information gets removed from the terraform.tfstate file and all the previous content of this file is saved as a backup to terraform.tfstate.backup. But the instance is still not deleted.
I think this is your main problem:
You ran apply with your "aws" provider defined one way (via a variable), but then you ran destroy with that provider defined differently (you let the "region" variable fall back to its default instead of specifying it).
As a result, terraform destroy looked in the wrong place (wrong AWS region) for your created resources.
Since terraform destroy was looking in the wrong place, it found nothing there.
Therefore terraform destroy saw that it did not need to destroy anything, just update its locally stored state information to reflect the absence of the resources.
Try these steps instead:
terraform apply -var 'region=us-east-2'
terraform destroy -var 'region=us-east-2'
This works for me with Terraform v0.12.2 + provider.aws v2.16.0.
I am guessing slightly here, but it seems the point is that you, the Terraform user, are responsible for making sure you destroy with the exact same provider definitions you applied with.
And if you're using any variables to help define your providers, then this is something you will need to be especially mindful of, since you are making it easy to accidentally change provider definitions.
As a side note, I ran into a similar confusion myself. It seems to me that HashiCorp's Getting Started guide, in its current state, could do a better job of warning about this. It walks newbies through a very similar setup to yours, and currently appears to say nothing about how to destroy properly, or any potential pitfalls.
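One way to make this harder to get wrong (a sketch, assuming you keep the region variable rather than hard-coding the provider) is to put the value in a terraform.tfvars file, which plan, apply and destroy all load automatically, instead of remembering to pass -var on each command:
# terraform.tfvars -- picked up automatically by plan, apply and destroy
region = "us-east-2"
With that file in place, plain terraform apply and terraform destroy both see the same region, so the provider is defined identically in both runs.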
Perhaps you have multiple providers set. Try aliasing your provider and passing that into the resource.
provider "aws" {
region = var.region
alias = "mine"
}
resource "aws_instance" "web" {
provider = aws.mine
}
How can I extract the attribute values returned from a list of peered virtual networks?
I executed this code and I need to extract the network ID:
list_all = network_client.virtual_network_peerings.list(
    GROUP_NAME,
    VNET_NAME
)

for peer in list_all:
    print(peer)
and I get this value from the print above:
{'additional_properties': {'type': 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings'},
'id': '/subscriptions/c70b9b-efd6-497d-98d8-e1e1d497425/resourceGroups/azure-sample-group-virtual-machines/providers/Microsoft.Network/virtualNetworks/azure-sample-vnet/virtualNetworkPeerings/sample-vnetpeer',
'allow_virtual_network_access': True,
'allow_forwarded_traffic': True,
'allow_gateway_transit': False,
'use_remote_gateways': False,
'remote_virtual_network': <azure.mgmt.network.v2018_08_01.models.sub_resource_py3.SubResource object at 0x048D6950>,
'remote_address_space': <azure.mgmt.network.v2018_08_01.models.address_space_py3.AddressSpace object at 0x048D68D0>,
'peering_state': 'Initiated',
'provisioning_state': 'Succeeded',
'name': 'sample-vnetpeer',
'etag': 'W/"653f7f94-3c4e-4275-bfdf-0bbbd9beb6e4"'}
How can I get the value of "remote_virtual_network"?
My feeling is that your question is actually more a Python question than an Azure question. Assuming this field is set with values in your application, remote_virtual_network is a SubResource, meaning it has only one attribute: id
for peer in list_all:
    remote_virtual_network_id = peer.remote_virtual_network.id
That id refers to an actual virtual network, so if you want details about it you need to fetch it with network_client.virtual_networks.get:
https://learn.microsoft.com/en-us/python/api/azure-mgmt-network/azure.mgmt.network.v2018_08_01.operations.virtualnetworksoperations?view=azure-python#get
The tricky part is that you get an ID, but the VNet get call asks for a resource group name and a VNet name; you can use the ARM ID parser for that:
https://learn.microsoft.com/en-us/python/api/msrestazure/msrestazure.tools?view=azure-python#parse-resource-id
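Putting those two steps together, here is a minimal sketch (assuming the network_client and list_all from the question, and that the remote VNet lives in the same subscription as the client; parse_resource_id comes from msrestazure.tools):
from msrestazure.tools import parse_resource_id

for peer in list_all:
    # remote_virtual_network is a SubResource: its only attribute is the ARM ID
    remote_id = peer.remote_virtual_network.id

    # Split the ARM ID into its parts (subscription, resource group, name, ...)
    parts = parse_resource_id(remote_id)

    # Fetch the full remote virtual network using the parsed pieces
    remote_vnet = network_client.virtual_networks.get(
        parts['resource_group'],
        parts['name'],
    )
    print(remote_vnet.name, remote_vnet.address_space.address_prefixes)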
'remote_virtual_network': <azure.mgmt.network.v2018_08_01.models.sub_resource_py3.SubResource object at 0x048D6950>,
'remote_address_space': <azure.mgmt.network.v2018_08_01.models.address_space_py3.AddressSpace object at 0x048D68D0>
I will try this out and get back.
This command is analogous to "Get-AzureRmVirtualNetworkPeering -ResourceGroupName -VirtualNetworkName -Name" in Azure PowerShell.
As for remote_virtual_network, you don't have one here. You will only get this if you have a remote gateway enabled, in which case the peer learns the IP address of the remote (on-premises) site you are trying to connect to.
To get this value, deploy a gateway in the VNet and connect it to, say, "Vnet-S2S-test" with a gateway deployed there as well.
Once the site-to-site connection between the VNets is up, you can execute the command and you should see those fields populated with the local network gateway details.
When using Apache JClouds, how do you launch a compute instance with a public IP? This is different from an Elastic IP. Using the official AWS API you can do something like:
//create network information
InstanceNetworkInterfaceSpecification networkWithPublicIp = new InstanceNetworkInterfaceSpecification()
    .withSubnetId(subnetId)
    .withAssociatePublicIpAddress(true)
    .withGroups(securityGroupIds)
    .withDeviceIndex(0);
Once the node is launched, it will have a randomly assigned public IP (not elastic). Is there a way to do this with Jclouds and AWSEC2TemplateOptions?
There is an example in the doc at http://jclouds.apache.org/guides/aws/
// ex. to get an ip and associate it with a node
String ip = ec2Client.getElasticIPAddressServices().allocateAddressInRegion(node.getLocation().getId());
ec2Client.getElasticIPAddressServices().associateAddressInRegion(node.getLocation().getId(),ip, node.getProviderId());
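Note that the snippet above allocates and associates an Elastic IP, whereas the question asks for an auto-assigned (non-Elastic) public IP. Below is a rough sketch of the latter; it assumes your jclouds version exposes an associatePublicIpAddress option on AWSEC2TemplateOptions (treat the method name, and the group name "web", as assumptions to check against the javadoc of the release you use):
// Sketch only -- org.jclouds.aws.ec2.compute.AWSEC2TemplateOptions; verify that
// associatePublicIpAddress exists in the jclouds version you use.
AWSEC2TemplateOptions options = new AWSEC2TemplateOptions();
options.subnetId(subnetId);           // launch into this VPC subnet
options.associatePublicIpAddress();   // ask EC2 for an auto-assigned public IP at launch
computeService.createNodesInGroup("web", 1, options);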
Given I have an AWS instance IP, how can I get the EC2 instance collection object via the Ruby aws-sdk's filter option? For example:
#ec2.instances.filter(valid_filter_name, ec2_instance_ip)
I've tried 'public_ip_address' and 'public_ip' as the filter name, but those didn't work. I'm using this API doc: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/EC2/FilteredCollection.html#filter-instance_method, but it does not mention what the valid parameters are.
It turns out the correct parameter to use (found by trial & error) is 'ip-address'. Here's an example:
#ec2.instances.filter('ip-address', ec2_instance_ip)
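A slightly fuller sketch of how that might be used with the v1 Ruby SDK the linked docs describe (the region and IP address here are illustrative):
require 'aws-sdk'   # v1 of the Ruby SDK, i.e. the AWS::EC2 interface

AWS.config(region: 'us-east-1')
ec2 = AWS::EC2.new

# 'ip-address' filters on the instance's public IPv4 address
matching = ec2.instances.filter('ip-address', '203.0.113.10')
matching.each do |instance|
  puts instance.id
end
The filter names accepted here are the EC2 DescribeInstances API filter names, which is why 'ip-address' works while Ruby attribute names like 'public_ip_address' do not.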