EC2 instances recreated by Terraform when new AMI released

I have written some Terraform code to create some servers. For the AMI, I was using a Terraform data source to fetch the latest Ubuntu 16.04 image ID and assign it to the EC2 instances.
Recently I wanted to add another EC2 instance to this environment, but when I run terraform plan I can see that Terraform is trying to destroy the existing EC2 instances and recreate them. The reason is that a new Ubuntu image has been released, so Terraform wants to replace the old instances with new ones built from the new AMI ID.
Is there any way I can address this issue? I don't want to accidentally delete our production servers.
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
module "jenkins" {
source = "terraform-aws-modules/ec2-instance/aws"
name = "Jenkins"
instance_count = 1
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.small"
associate_public_ip_address = true
disable_api_termination = true
key_name = "${aws_key_pair.ssh_key.key_name}"
monitoring = false
vpc_security_group_ids = "${module.jenkins_http_sg.this_security_group_id}", "${module.jenkins_https_sg.this_security_group_id}", "${module.ssh_sg.this_security_group_id}"]
subnet_id = "${module.vpc.public_subnets[0]}"
iam_instance_profile = "${aws_iam_instance_profile.update-dns-profile.name}"
tags = {
Terraform = "true"
}
}

While the other answer explains the behaviour, I solved the problem by adding the following lifecycle block to the aws_instance resource:
lifecycle {
  ignore_changes = ["ami"]
}
Please note that if you are using the AWS module like I am, you will have to add this code to the main.tf file under .terraform/modules/.
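For context, here is a minimal sketch of where the block sits in a plain aws_instance resource (the resource name here is hypothetical); with it in place, Terraform keeps the instance on the AMI it was originally built from even after the data source resolves to a newer image:
resource "aws_instance" "jenkins" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.small"

  lifecycle {
    # Ignore drift between the instance's current AMI and the data
    # source's latest result, so the instance is not replaced for it.
    ignore_changes = ["ami"]
  }
}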

Terraform is doing exactly what you asked it to do. Each time it runs, it looks for the most recent AMI whose name matches ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-* and passes that AMI ID to the aws_instance resource. As it's not possible to modify the image ID of an existing instance, Terraform correctly determines that it must destroy the old instances and rebuild them from the new AMI.
If you want to pin a specific AMI, you should either make the data source return only a single image (e.g. by specifying the date stamp in the name filter) or hardcode the AMI ID you want to use:
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20190403"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
or:
variable "ami" {
default = "ami-0727f3c2d4b0226d5"
}
If you were to remove the most_recent = true parameter, your data source would instead find multiple images matching those criteria and then fail, because the aws_ami data source can only return a single AMI:
NOTE: If more or less than a single match is returned by the search, Terraform will fail. Ensure that your search is specific enough to return a single AMI ID only, or use most_recent to choose the most recent one. If you want to match multiple AMIs, use the aws_ami_ids data source instead.
Also note that I added the owners field to your data source. This has been required since version 2.0.0 of the AWS provider; without it the lookup was insecure, as your data source could have returned any public image using that naming scheme.
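For completeness, a minimal sketch of the aws_ami_ids variant mentioned in that note, which returns a list of matching image IDs (newest first) instead of failing on multiple matches:
data "aws_ami_ids" "ubuntu" {
  owners = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }
}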

Related

Not able to update lambda code on AWS console using terraform

I am creating a Lambda function using Terraform. As per the Terraform syntax, the Lambda code should be passed as a zip file; I am doing exactly that in the resource block below, and the function gets created without any issue. But when I try to update the Lambda code on the next run, it does not get updated. See the block below for reference.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
role = "..."
}
Need help to resolve this issue.
Set the source_code_hash argument so that Terraform updates the Lambda function whenever the code changes:
resource "aws_lambda_function" "stop_ec2" {
  # ... existing arguments ...
  source_code_hash = filebase64sha256("dest_dir/stop_ec2_upload.zip")
}
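Since the zip is already produced by the archive_file data source shown in the question, a variant (not part of the original answer) is to take the path and hash from that data source directly, which keeps the hash in sync with the archive step:
resource "aws_lambda_function" "stop_ec2" {
  function_name    = "stopEC2"
  handler          = "stop_ec2.handler"
  runtime          = "python3.6"
  filename         = "${data.archive_file.stop_ec2.output_path}"
  # Recomputed whenever the zip contents change, triggering an update.
  source_code_hash = "${data.archive_file.stop_ec2.output_base64sha256}"
  role             = "..."
}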

Create random resource name in Terraform

I'm using Terraform to create resources in Azure. In ARM templates I used to use uniqueString() to generate storage account names. Is it possible to generate a random name for a storage account using Terraform?
There are several random resources you can use in Terraform
https://www.terraform.io/docs/providers/random/index.html
Resources:
- random_id
- random_pet
- random_shuffle
- random_string
Using random_id as an example, together with the official sample code for the azurerm_storage_account resource, you can define the storage account name easily:
resource "random_id" "storage_account" {
byte_length = 8
}
resource "azurerm_storage_account" "testsa" {
name = "tfsta${lower(random_id.storage_account.hex)}"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_GRS"
tags {
environment = "staging"
}
}
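One sizing note: byte_length = 8 yields a 16-character hex suffix, so "tfsta" plus the suffix is 21 characters, which stays within Azure's 3-24 character limit for storage account names (lowercase letters and digits only).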

Using resource attributes as variable in Terraform templates

I'm trying to use the attributes of an EC2 instance as a variable, but it keeps failing or otherwise not working. As you can see below, I want to insert the private IP of the instance into a config file that will be copied to the instance. The remote-exec script will then move the file into place (/etc/vault.d/server-config.json).
instances.tf
resource "template_file" "tpl-vault-server-config" {
template = "${file("${path.module}/templates/files/vault-server-config.json.tpl")}"
vars {
aws_private_ip = "${aws_instance.ec2-consul-server.private_ip}"
}
}
provisioner "file" {
source = "${template_file.tpl-vault-server-config.rendered}"
destination = "/tmp/vault-server-config.json"
}
vault-server-config.json.tpl
backend "consul" {
address = "127.0.0.1:8500"
path = "vault"
tls_enable = 1
tls_ca_file = "/etc/consul.d/ssl/ca.cert"
tls_cert_file = "/etc/consul.d/ssl/consul.cert"
tls_key_file = "/etc/consul.d/ssl/consul.key"
}
listener "tcp" {
address = "${aws_private_ip}:8200"
tls_cert_file = "/etc/consul.d/ssl/consul.cert"
tls_key_file = "/etc/consul.d/ssl/consul.key"
}
The error on terraform plan is:
* aws_instance.ec2-consul-server: missing dependency: template_file.tpl-vault-server-config
Questions:
Am I taking the wrong approach?
Am I missing something basic?
How do you get an instance's attributes into a file?
Thanks in advance.
I realized that I was defining the template_file resource within the current resource, and this was part of the problem. Once I fixed that, things worked much more easily.
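As a minimal sketch of the corrected layout (the aws_instance shown here is hypothetical): the template_file resource sits at the top level of the module, and the provisioner lives inside the instance that needs the file. Note that the file provisioner's content argument takes the rendered string directly, whereas source expects a path to a local file:
resource "template_file" "tpl-vault-server-config" {
  template = "${file("${path.module}/templates/files/vault-server-config.json.tpl")}"

  vars {
    aws_private_ip = "${aws_instance.ec2-consul-server.private_ip}"
  }
}

resource "aws_instance" "vault" {
  # ... ami, instance_type, connection details, etc. ...

  provisioner "file" {
    content     = "${template_file.tpl-vault-server-config.rendered}"
    destination = "/tmp/vault-server-config.json"
  }
}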

How to force a new CNContact into a Local CNContainer?

I'm writing an app that stores contact info fetched through REST and JSON into a container, using CNContactStore. I would like to keep these contacts separate from any other accounts and stored only locally on the device, but a local store doesn't seem to exist, and I can't find any way to create or activate one.
I'm able to get the default store's ID (as configured on the device, e.g. iCloud), using:
let store = CNContactStore()
let containerID = store.defaultContainerIdentifier()
...and I can (theoretically) identify a local container like this — if one actually exists:
var allContainers = [CNContainer]()
do {
    allContainers = try store.containersMatchingPredicate(nil)
    for container in allContainers {
        if container.type == CNContainerType.Local {
            print("Local container is: \(container.identifier)")
            break
        }
    }
} catch {
    print("Error fetching containers")
}
But no local container exists. Any ideas on how to store my contacts locally, or in a new separate container?
This was possible as follows with the now-deprecated AddressBook API, which may still work as a workaround:
ABAddressBookRef addressBook = ABAddressBookCreateWithOptions( nil, nil );
ABRecordRef source = ABAddressBookGetSourceWithRecordID( addressBook, kABSourceTypeLocal );
ABRecordRef contact = ABPersonCreateInSource( source );
containersMatchingPredicate(nil) returns the default container only.
I have a similar problem: if iCloud has been configured, it returns only the CNContainerTypeCardDAV container; otherwise, the local container.

Using AWS API/SDK to Register new EC2 Instances with Existing Elastic Load Balancer - is it possible?

I'm working on using the .Net SDK to help automate the deployment of an application into Windows EC2 instances. The process I want to achieve is:
1. Create a new EC2 instance - this "bootstraps" itself by loading in the new application version using a service.
2. Ensure the new instance is in the 'running' state.
3. Run some simple acceptance tests on the new instance.
4. Register the new instance with an existing Elastic Load Balancer that has an instance running the old version of the application.
5. When the new instance is registered with the load balancer, de-register the old instance.
6. Stop the old EC2 instance.
I've managed to get steps 1 and 2 working, and I'm pretty confident about 3 and 6.
To do this I've been writing a simple C# console app that uses the AWS .Net SDK v1.3.2 to make the various API calls.
However, when I get to step 4 I cannot get the new instance registered with the load balancer. Here is my code:
public IList<Instance> PointToNewInstance(string newInstanceId)
{
    var allInstances = new List<Instance>();
    using (var elbClient = ClientUtilities.GetElbClient())
    {
        try
        {
            var newInstances = new List<Instance> { new Instance(newInstanceId) };
            var registerInstancesRequest = new RegisterInstancesWithLoadBalancerRequest
            {
                LoadBalancerName = LoadBalancerName,
                Instances = newInstances
            };
            var registerResponse = elbClient.RegisterInstancesWithLoadBalancer(registerInstancesRequest);
            allInstances = registerResponse.RegisterInstancesWithLoadBalancerResult.Instances;
            var describeInstanceHealthRequest = new DescribeInstanceHealthRequest
            {
                Instances = newInstances
            };
            DescribeInstanceHealthResponse describeInstanceHealthResponse;
            do
            {
                // Poll until the new instance passes the ELB health check;
                // sleep between calls to avoid a hot loop.
                System.Threading.Thread.Sleep(5000);
                describeInstanceHealthResponse = elbClient.DescribeInstanceHealth(describeInstanceHealthRequest);
            } while (describeInstanceHealthResponse.DescribeInstanceHealthResult.InstanceStates[0].State == "OutOfService");
            _log.DebugFormat("New instance [{0}] now in service - about to remove old instance", newInstanceId);
            if (allInstances.Any(i => i.InstanceId != newInstanceId))
            {
                elbClient.DeregisterInstancesFromLoadBalancer(new DeregisterInstancesFromLoadBalancerRequest
                {
                    Instances = allInstances.Where(i => i.InstanceId != newInstanceId).ToList(),
                    LoadBalancerName = LoadBalancerName
                });
                foreach (var instance in allInstances.Where(i => i.InstanceId != newInstanceId).ToList())
                {
                    _log.DebugFormat("Instance [{0}] has now been de-registered from load-balancer [{1}]", instance.InstanceId, LoadBalancerName);
                }
            }
        }
        catch (Exception exception)
        {
            _log.Error(exception);
        }
    }
    return allInstances.Where(i => i.InstanceId != newInstanceId).ToList();
}
The code just freezes at this line:
var registerResponse = elbClient.RegisterInstancesWithLoadBalancer(registerInstancesRequest);
When I looked in more detail at the documentation (relevant documentation here) I noticed this line:
NOTE: In order for this call to be successful, the client must have created the LoadBalancer. The client must provide the same account credentials as those that were used to create the LoadBalancer.
Is it actually possible to use the API to register new instances with an existing load balancer?
All of that is easy to implement. Use Auto Scaling and its API.
As Roman mentions, it sounds like Auto Scaling is a good way for you to go. It may not solve all of your problems, but it's certainly a good starting point:
- An Auto Scaling group can be tied to a load balancer, e.g. "I'll have X healthy instances".
- New instances are automatically added to the load balancer (no traffic will be sent until they pass the health check).
- You can define custom health checks, such as pinging http://hostname/isalive - just have your instance respond to these requests once it passes step 3.
- You can define scaling policies, but by default, if you're over capacity, the oldest instances will be killed.
- You don't mention the use case of the app, but if you don't want a public-facing address you can use an internal load balancer that doesn't take any traffic and just looks after the health check.
- Where possible you should always apply least-privilege principles for security; with your method you're going to have to give every instance a lot of power to control other instances, and whether through mistake or abuse this can go wrong very easily.
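As a rough sketch of the setup being described, expressed in Terraform terms since that's what the rest of this page uses (all names here are hypothetical):
resource "aws_autoscaling_group" "app" {
  name                 = "app-asg"
  availability_zones   = ["us-east-1a"]
  min_size             = 2
  max_size             = 4
  launch_configuration = "${aws_launch_configuration.app.name}"

  # New instances register with the ELB automatically and receive
  # traffic only once they pass its health check.
  load_balancers = ["${aws_elb.app.name}"]

  # Replace instances that fail the ELB's app-level check
  # (e.g. HTTP:80/isalive), not just EC2 status checks.
  health_check_type         = "ELB"
  health_check_grace_period = 300
}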
