I'm trying to set up something really simple with Terraform, but it gives me an error I haven't seen before.
When I run terraform validate -var-file=secrets.tfvars I get the following error:
Error loading files open /home/MYUSER/Documents/git/packer-with-terraform/terratest/-var-file=secrets.tfvars: no such file or directory
And when I run terraform plan -var-file=secrets.tfvars I get this:
invalid value "secrets.tfvars" for flag -var-file: Error decoding Terraform vars file: At 1:10: root.variable: unknown type for string *ast.ObjectList
I have three files within the same folder, and their content is minimal:
providers.tf
provider "aws" {
region = "us-west-1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
main.tf
resource "aws_instance" "master_proxy" {
ami = "ami-123sample"
instance_type = "t2.micro"
}
secrets.tfvars
variable "access_key" { default = "sampleaccesskey" }
variable "secret_key" { default = "samplesecretkey" }
If I set access_key and secret_key directly, and not via variables, then it works. A similar setup with secrets-files and whatnot works on another project of mine; I just don't understand what's wrong here.
Firstly, terraform validate validates a folder of .tf files to check that the syntax is correct; you can't pass a separate vars file to the command. In fact, terraform validate won't even check that your variables are set properly.
Secondly, your secrets.tfvars file is using the wrong syntax. Instead you want it to look more like this:
secrets.tfvars:
access_key = "sampleaccesskey"
secret_key = "samplesecretkey"
But this will error because you haven't actually defined the variables in a .tf file:
providers.tf
variable "access_key" { default = "sampleaccesskey" }
variable "secret_key" { default = "samplesecretkey" }
provider "aws" {
region = "us-west-1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
If you don't have a sensible default for a variable (as is typically the case for credentials like these), you can remove the default argument from the variable, and Terraform will then error on the plan because a required variable is not set:
providers.tf
variable "access_key" {}
variable "secret_key" {}
provider "aws" {
region = "us-west-1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
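With the variable declarations in place and no defaults set, you then supply the real values at plan and apply time with the -var-file flag:
terraform plan -var-file=secrets.tfvars
terraform apply -var-file=secrets.tfvars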
Well, I messed up big time. I somehow managed to forget the supposed structure (and difference) of *.tf and *.tfvars files.
For those who might run into a similar problem later on:
*.tf files are for configuration and declaration, which means that any variables must be defined within a *.tf file.
*.tfvars files are for giving values to already defined variables. These files can be passed with the -var-file flag (which I had misused). Here is a working example: first a single .tf file with the variable declarations and resources, then the matching .tfvars file with the values.
# Set a Provider
provider "aws" {
region = "${var.region}"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
resource "aws_security_group" "test-server-sg" {
name = "test-server-sg"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "test-server" {
ami = "${var.ami}"
instance_type = "${var.instance_type}"
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -fp 8080 &
EOF
tags {
name = "Test Web Server"
environment = "${var.environment}"
project = "${var.project}"
}
}
variable "region" {
type = "string"
description = "AWS region"
}
variable "access_key" {
type = "string"
description = "AWS access key"
}
variable "secret_key" {
type = "string"
description = "AWS secret key"
}
variable "ami" {
type = "string"
description = "AWS image id"
}
variable "instance_type" {
type = "string"
description = "AWS instance type"
}
variable "environment" {
type = "string"
description = "AWS environment name"
}
variable "project" {
type = "string"
description = "AWS project name"
}
output "Test Server Public DNS" {
value = "${aws_instance.test-server.public_dns}"
}
output "Test Server Public IP" {
value = "${aws_instance.test-server.public_ip}"
}
region = "us-east-1"
access_key = "put your aws access key here"
secret_key = "put your aws secret key here"
ami = "ami-40d28157"
instance_type = "t2.micro"
environment = "Test"
project = "Master Terraform"
How to upload local file to the ec2 instance with the module terraform-aws-modules/ec2-instance/aws?
I placed the provisioner inside module "ec2". It does not work.
I placed the provisioner outside of the module "ec2". It does not work either.
In both cases I get the error: "Blocks of type "provisioner" are not expected here".
"provisioner" is inside module "ec2". It does not work.
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.sg_WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
provisioner "file" {
source = "./foo.txt"
destination = "/home/ec2-user/foo.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("./keys.pem")}"
host = module.ec2.public_dns
}
}
}
"provisioner" is outsite of the module "ec2". It does not work.
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.sg_WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
}
provisioner "file" {
source = "./foo.txt"
destination = "/home/ec2-user/foo.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("./keys.pem")}"
host = module.ec2.public_dns
}
}
You can use a null_resource to make it work!
resource "null_resource" "this" {
provisioner "file" {
source = "./foo.txt"
destination = "/home/ec2-user/foo.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("./keys.pem")}"
host = module.ec2.public_dns
}
}
}
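Because the connection block references module.ec2.public_dns, Terraform already waits for the instance before uploading the file. If you also want the upload to re-run whenever the instance is replaced, you can add a triggers map; here is a minimal sketch, assuming the module exposes an id output (the terraform-aws-modules/ec2-instance module does):
resource "null_resource" "this" {
  # Re-create this resource (and re-run the provisioner) when the instance changes.
  triggers = {
    instance_id = module.ec2.id
  }

  provisioner "file" {
    source      = "./foo.txt"
    destination = "/home/ec2-user/foo.txt"

    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("./keys.pem")
      host        = module.ec2.public_dns
    }
  }
}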
You can provision files on an EC2 instance with the YAML cloud-init syntax which is passed to the EC2 instance as user-data. Here is an example of passing cloud-init config to EC2.
cloud-init.yaml file:
#cloud-config
# vim: syntax=yaml
#
# This is the configuration syntax that the write_files module
# will know how to understand. Encoding can be given b64 or gzip or (gz+b64).
# The content will be decoded accordingly and then written to the path that is
# provided.
#
# Note: Content strings here are truncated for example purposes.
write_files:
  - content: |
      # Your TXT file content...
      # goes here
    path: /home/ec2-user/foo.txt
    owner: ec2-user:ec2-user
    permissions: '0644'
Terraform file:
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.sg_WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
user_data = file("./cloud-init.yaml")
}
The benefits of this approach over the approach in the accepted answer are:
This method creates the file immediately at instance creation, instead of having to wait for the instance to come up first. The null-provisioner/SSH connection method has to wait for the EC2 instance to become available, and that timing could make your Terraform workflow flaky.
This method doesn't require the EC2 instance to be reachable from your local computer that is running Terraform. You could be deploying the EC2 instance to a private subnet behind a load balancer, which would prevent the null-provisioner/SSH connect method from working.
This doesn't require you to have the SSH key for the EC2 instance available on your local computer. You might want to only allow AWS SSM connect to your EC2 instance, to keep it more secure than allowing SSH directly from the Internet, and that would prevent the null-provisioner/SSH connect method from working. Further, storing or referencing an SSH private key in your Terraform state adds a risk factor to your overall security profile.
This doesn't require the use of a provisioner on a null_resource, about which the Terraform documentation states:
Important: Use provisioners as a last resort. There are better alternatives for most situations. Refer to Declaring Provisioners for more details.
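If the file content itself needs values computed by Terraform, the same approach works with templatefile() instead of file(); a sketch, where the greeting variable and the .tftpl file name are made-up examples:
module "ec2" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "4.1.4"

  # ... same arguments as above ...

  # Render the cloud-init template, substituting ${greeting} inside the YAML.
  user_data = templatefile("./cloud-init.yaml.tftpl", {
    greeting = "Hello from Terraform"
  })
}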
I'm having an issue with a Terraform deployment. I have a module that creates a network, and another module that creates a series of EC2 instances. These servers are required to have specific IP addresses, which are called out in the module (I would rather set these dynamically, but for now they are hardcoded). However, I am getting an error that the IP address I am associating with the EC2 instance 'does not fall within the subnet's address range', even though it does. Here is the basic breakdown:
servers
->main.tf
->variables.tf
->outputs.tf
network
->main.tf
->variables.tf
->outputs.tf
main.tf
The relevant bits are as follows:
network main.tf
# Create VPC
resource "aws_vpc" "foo" {
cidr_block = "192.168.1.0/24"
enable_dns_hostnames = "true"
enable_dns_support = "true"
tags = {
Name = "foo"
}
}
# Create a Subnet
resource "aws_subnet" "subnet-1" {
vpc_id = aws_vpc.foo.id
cidr_block = "192.168.1.0/24"
availability_zone = "ca-central-1a"
tags = {
Name = "subnet-1"
}
}
servers main.tf
resource "aws_instance" "bar" {
ami = var.some_ami
instance_type = "t3.medium"
associate_public_ip_address = true
private_ip = "192.168.1.15"
# root disk
root_block_device {
volume_size = "60"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
}
tags = {
Name = "bar"
}
}
main.tf
module "network" {
source = "./network"
}
module "servers" {
source = "./servers"
subnet_id = module.network.aws_subnet
}
The VPC and the subnet are created correctly (I verified both in AWS), but for some reason when the server is being created I get the following error:
│ Error: creating EC2 Instance: InvalidParameterValue: Address 192.168.1.15 does not fall within the subnet's address range status code: 400
I left out some of the irrelevant bits of the .tf files, but everything else works as expected except this one thing. Does anyone know what's going on?
Your aws_instance resource does not have a subnet_id attribute, so the instance is being launched into a default subnet, whose CIDR range does not include 192.168.1.15.
Add the subnet_id attribute as below:
resource "aws_instance" "bar" {
ami = var.some_ami
instance_type = "t3.medium"
associate_public_ip_address = true
subnet_id = "your_subnet_id"
private_ip = "192.168.1.15"
# root disk
root_block_device {
volume_size = "60"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
}
tags = {
Name = "bar"
}
}
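Since the subnet is created in your network module, you also need to expose its ID as an output there and consume it inside the servers module rather than hardcoding it; a sketch, where the output and variable names are my own choices, not taken from your code:
# network/outputs.tf
output "subnet_id" {
  value = aws_subnet.subnet-1.id
}

# servers/variables.tf
variable "subnet_id" {}

# servers/main.tf
resource "aws_instance" "bar" {
  # ... existing arguments ...
  subnet_id  = var.subnet_id
  private_ip = "192.168.1.15"
}

# root main.tf
module "servers" {
  source    = "./servers"
  subnet_id = module.network.subnet_id
}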
You could also use a data source to look up the subnet ID:
data "aws_subnet" "selected" {
filter {
name = "tag:Name"
values = ["myawesomesubnet"]
}
}
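The instance then references the looked-up ID, for example:
resource "aws_instance" "bar" {
  # ... other arguments as above ...
  subnet_id  = data.aws_subnet.selected.id
  private_ip = "192.168.1.15"
}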
I'm aware that there are already several posts similar to this one - I've gone through them and adapted my Terraform configuration file, but it makes no difference.
Therefore, I'd like to publish my configuration file and my use case: I'd like to provision a (Windows) virtual machine on AWS using Terraform. It works without the file provisioning part; with it included, provisioning results in a timeout.
This includes adaptations from previous posts:
SSH connection restriction
SSH isnt working in Windows with Terraform provisioner connection type
Usage of a Security group
Terraform File provisioner can't connect ec2 over ssh. timeout - last error: dial tcp 92.242.xxx.xx:22: i/o timeout
I also get a timeout when using "winrm" instead of "ssh".
I'd be happy if you could provide any hints for the following config file:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
# Configure the AWS Provider
provider "aws" {
access_key = "<my access key>"
secret_key = "<my secret key>"
region = "eu-central-1"
}
resource "aws_instance" "webserver" {
ami = "ami-07dfec7a6d529b77a"
instance_type = "t2.micro"
security_groups = [aws_security_group.sgwebserver.name]
key_name = aws_key_pair.pubkey.key_name
tags = {
"Name" = "WebServer-Win"
}
}
resource "null_resource" "deployBundle" {
connection {
type = "ssh"
user = "Administrator"
private_key = "${file("C:/Users/<my user name>/aws_keypair/aws_instance.pem")}"
host = aws_instance.webserver.public_ip
}
provisioner "file" {
source = "files/test.txt"
destination = "C:/test.txt"
}
depends_on = [ aws_instance.webserver ]
}
resource "aws_security_group" "sgwebserver" {
name = "sgwebserver"
description = "Allow ssh inbound traffic"
ingress {
from_port = 0
to_port = 6556
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "sgwebserver"
}
}
resource "aws_key_pair" "pubkey" {
key_name = "aws-cloud"
public_key = file("key/aws_instance.pub")
}
resource "aws_eip" "elasticip" {
instance = aws_instance.webserver.id
}
output "eip" {
value = aws_eip.elasticip.public_ip
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
tags = {
Terraform = "true"
Environment = "dev"
}
}
Thanks a lot in advance!
Windows EC2 instances don't support SSH out of the box; they support RDP. You would have to install SSH server software on the instance before you could SSH into it.
I suggest doing something like placing the file in S3, and using a user data script to trigger the Windows EC2 instance to download the file on startup.
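A sketch of that approach, assuming the AMI ships with the AWS Tools for PowerShell (Amazon's Windows AMIs do) and the instance has an IAM instance profile that can read the bucket; the bucket and key names are placeholders:
resource "aws_instance" "webserver" {
  ami           = "ami-07dfec7a6d529b77a"
  instance_type = "t2.micro"
  # ... security group, key pair, etc. as above ...

  # Download the file from S3 at first boot instead of pushing it over SSH.
  user_data = <<-EOF
    <powershell>
    Read-S3Object -BucketName "my-deploy-bucket" -Key "test.txt" -File "C:\test.txt"
    </powershell>
  EOF
}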
I'm trying to set up a Terraform script to deploy a Windows server. When running terraform apply I get the error message shown below:
Error: Invalid reference
on main.tf line 44, in resource "aws_instance" "server":
44: password = "${rsadecrypt(aws_instance.server[0].password_data, file(KEY_PATH))}"
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
AFAIK the resource is "aws_instance", the name is "server[0]", and the attribute is "password_data". I know I'm missing something but don't know what; any assistance would be appreciated.
The full resource block is below in case there is something I'm missing in there.
Thanks
resource "aws_instance" "server" {
ami = var.AMIS[var.AWS_REGION]
instance_type = var.AWS_INSTANCE
vpc_security_group_ids = [module.networking.security_group_id_out]
subnet_id = module.networking.subnet_id_out
## Use this count key to determine how many servers you want to create.
count = 1
key_name = var.KEY_NAME
tags = {
# Name = "Server-Cloud"
Name = "Server-${count.index}"
}
root_block_device {
volume_size = var.VOLUME_SIZE
volume_type = var.VOLUME_TYPE
delete_on_termination = true
}
get_password_data = true
provisioner "remote-exec" {
connection {
host = coalesce(self.public_ip, self.private_ip)
type = "winrm"
## Need to provide your own .pem key that can be created in AWS or on your machine for each provisioned EC2.
password = ${rsadecrypt(aws_instance.server[0].password_data, file(KEY_PATH))}
}
inline = [
"powershell -ExecutionPolicy Unrestricted C:\\Users\\Administrator\\Desktop\\installserver.ps1 -Schedule",
]
}
provisioner "local-exec" {
command = "echo ${self.public_ip} >> ../public_ips.txt"
}
}
The "Invalid reference" error comes from file(KEY_PATH): the bare KEY_PATH is parsed as a reference to a resource type, so it needs to be a variable (file(var.KEY_PATH)) or a quoted literal path. Inside the resource's own provisioner you should also reference the instance through self rather than aws_instance.server[0], since a resource cannot depend on itself.
Use password = "${rsadecrypt(self.password_data, file("/root/.ssh/id_rsa"))}"
without user = "admin", as below:
resource "aws_instance" "windows_server" {
get_password_data = "true"
connection {
host = "${self.public_ip}"
type = "winrm"
https = false
password = "${rsadecrypt(self.password_data, file("/root/.ssh/id_rsa"))}"
agent = false
insecure = "true"
}
}
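Applied to the original remote-exec provisioner, that looks something like this sketch (it assumes KEY_PATH is declared as a variable holding the path to the key pair's private key):
provisioner "remote-exec" {
  connection {
    host     = coalesce(self.public_ip, self.private_ip)
    type     = "winrm"
    # Decrypt the auto-generated Administrator password with the key pair's private key.
    password = rsadecrypt(self.password_data, file(var.KEY_PATH))
  }
  inline = [
    "powershell -ExecutionPolicy Unrestricted C:\\Users\\Administrator\\Desktop\\installserver.ps1 -Schedule",
  ]
}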
I want to run a metadata_startup_script when using Terraform to create a GCE instance.
This script is supposed to create a user and assign a random password to this user.
I know that I can create a random string in Terraform with something like:
resource "random_string" "pass" {
length = 20
}
And at some point my startup.sh will contain a line like:
echo myuser:${PSSWD} | chpasswd
How can I chain the random_string resource generation with the appropriate script invocation through the metadata_startup_script parameter?
Here is the google_compute_instance resource definition:
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = "${file("./startup.sh")}"
}
where startup.sh includes the above line setting the password non-interactively.
If you want to pass a Terraform variable into a templated file then you need to use a template.
In Terraform <0.12 you'll want to use the template_file data source like this:
resource "random_string" "pass" {
length = 20
}
data "template_file" "init" {
template = "${file("./startup.sh")}"
vars = {
password = "${random_string.pass.result}"
}
}
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-standard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = "${data.template_file.startup_script.rendered}"
}
and change your startup.sh script to be:
echo myuser:${password} | chpasswd
Note that the template uses ${} for interpolation of variables that Terraform is passing into the script. If you need to use $ anywhere else in your script then you'll need to escape it by using $$ to get a literal $ in your rendered script.
In Terraform 0.12+ there is the new templatefile function which can be used instead of the template_file data source if you'd prefer:
resource "random_string" "pass" {
length = 20
}
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-standard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = templatefile("./startup.sh", {password = random_string.pass.result})
}
As an aside you should also notice the warning on random_string:
This resource does use a cryptographic random number generator.
Historically this resource's intended usage has been ambiguous as the original example used it in a password. For backwards compatibility it will continue to exist. For unique ids please use random_id, for sensitive random values please use random_password.
As such you should instead use the random_password resource:
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
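The reference in the template call then changes from random_string.pass.result to random_password.password.result, for example with the 0.12+ templatefile function:
metadata_startup_script = templatefile("./startup.sh", {
  password = random_password.password.result
})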