So I tried to spin up an EC2 instance using Terraform on my Mac (which is running Sierra and Terraform 0.11.5) but keep getting a few errors:
Command: terraform plan
Error: Error parsing /Users/*****/terraform/aws.tf: At 1:11: illegal char
Command: terraform show
Error: Failed to load backend: Error loading backend config: Error parsing /Users/******/terraform/aws.tf: At 1:11: illegal char
Here is what my file looks like:
provider "aws" {
region = "us-east-1"
access_key = ""
secret_key = "********"
}
resource "aws_key_pair" "nick-key" {
key_name = "nick-key"
public_key = "ssh-rsa ********************************************"
}
resource "aws_instance" "web" {
ami = "ami-1853ac65"
instance_type = "t2.micro"
key_name = "${aws_key_pair.nick-key.key_name}"
I put * in place of the real information used in the file in case anyone was wondering. Any help would be greatly appreciated!! Thank you in advance!
To answer the question, and also to give some feedback on how to ensure your formatting is correct:
As mentioned in the comment, the example is missing a closing curly brace:
resource "aws_instance" "web" {
ami = "ami-1853ac65"
instance_type = "t2.micro"
key_name = "${aws_key_pair.nick-key.key_name}"
}
Terraform has a validate command that will check for these formatting issues. If you run it on the example above you will see:
$ terraform validate
Error: Error parsing test.tf: object expected closing RBRACE got: EOF
Ensure you are calling the correct version of Terraform from the terminal.
I had a parsing error like this when using Terraform v0.11 to run scripts written for Terraform v0.12.
This can easily happen if you have two versions of Terraform installed.
Make sure you have set up an alias for each in your bash profile (or the appropriate shell profile file) and are using the correct command.
I tend to have the following set up in my working environment:
alias terraform='/usr/local/bin/terraform' # points to the Terraform 0.12 installation
alias terraform11='/usr/local/bin/terraform11'
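With those aliases in place, you can quickly check which binary a given command resolves to before running your scripts, for example:
type terraform      # shows the path or alias the command resolves to
terraform version   # prints the version that is actually being invoked
terraform11 version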
I am trying to configure a Lambda function which will export an API backup to S3. But when I try to get an ordinary Swagger backup through Lambda using this script:
import boto3
client = boto3.client('apigateway')
def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            extensions: 'authorizers'
        },
        accepts='application/json'
    )
I am getting this error:
[ERROR] NameError: name 'extensions' is not defined
Please help me resolve this issue.
Could you please check whether the documentation has been explicitly published, and whether the API has been deployed to a stage? It has to be deployed before it is available in the export.
The problem is in:
parameters={
extensions: 'authorizers'
}
You're passing a dictionary, which is fine, but the key should be a string. Since you don't have quotes around extensions, Python tries to resolve it as a variable named extensions, which doesn't exist in your code, and so it raises the NameError.
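A minimal corrected sketch of the same call, with the key passed as a string literal (all values are taken from the question):
import boto3

client = boto3.client('apigateway')

def lambda_handler(event, context):
    response = client.get_export(
        restApiId='xtmeuujbycids',
        stageName='test',
        exportType='swagger',
        parameters={
            'extensions': 'authorizers'  # quoted, so it is a plain string key
        },
        accepts='application/json'
    )
    return response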
I am trying to deploy a *.sh file from my localhost to EC2 using Terraform. Note that I am creating all of the infrastructure via Terraform, so to copy the file to the remote host I am using a Terraform provisioner. The question is: how can I find out the private key or password of the ubuntu user for deploying? Or maybe somebody knows a different solution. The goal is to run the .sh file on EC2. Thanks beforehand!
If you want to do it using a provisioner and you have the private key local to where Terraform is being executed, then SCSI-9's solution should work well.
However, if you can't ensure the private key is available then you could always do something like how Elastic Beanstalk deploys and use S3 as an intermediary.
Something like this:
resource "aws_s3_bucket_object" "script" {
bucket = module.s3_bucket.bucket_name
key = regex("([^/]+$)", var.script_file)[0]
source = var.script_file
etag = filemd5(var.script_file)
}
resource "aws_instance" "this" {
depends_on = [aws_s3_bucket_object.script]
user_data = templatefile("${path.module}/.scripts/userdata.sh", {
s3_bucket = module.s3_bucket.bucket_name
object_key = aws_s3_bucket_object.script.id
})
...
}
And then somewhere in your userdata script, you can fetch the object from s3.
aws s3 cp s3://${s3_bucket}/${object_key} /some/path
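Fleshed out a little, the userdata template could look something like this (a sketch; the destination path and the final execution step are assumptions about what you want to do with the script):
#!/bin/bash
# ${s3_bucket} and ${object_key} are rendered by templatefile() before
# the instance ever sees this script
aws s3 cp "s3://${s3_bucket}/${object_key}" /tmp/deploy.sh
chmod +x /tmp/deploy.sh
/tmp/deploy.sh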
Of course, you will also have to ensure that the instance has permissions to read from the s3 bucket, which you can do by attaching a role to the EC2 instance with the appropriate policy.
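Something along these lines could grant that read access (a sketch; the resource names and the module's bucket_arn output are assumptions about your setup):
data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "read_script" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${module.s3_bucket.bucket_arn}/*"] # assumes the module exposes the bucket ARN
  }
}

resource "aws_iam_role" "script_reader" {
  name               = "script-reader"
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

resource "aws_iam_role_policy" "script_reader" {
  role   = aws_iam_role.script_reader.id
  policy = data.aws_iam_policy_document.read_script.json
}

resource "aws_iam_instance_profile" "script_reader" {
  name = "script-reader"
  role = aws_iam_role.script_reader.name
}
The instance then picks the role up via iam_instance_profile = aws_iam_instance_profile.script_reader.name in the aws_instance block.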
I want to combine Ansible with Terraform so that Terraform creates the machines and Ansible provisions them. Using terraform-provisioner-ansible it's possible to bring them together seamlessly. But I noticed a lack of change detection, something that doesn't happen when Ansible runs standalone.
TL;DR: How can I apply changes made in Ansible to the Terraform Ansible plugin? Or at least execute the ansible plugin on every update so that Ansible can handle this itself?
Example use case
Consider this playbook, which installs some packages:
- name: Ansible install package test
  hosts: all
  tasks:
    - name: Install cli tools
      become: yes
      apt:
        name: "{{ tools }}"
        update_cache: yes
      vars:
        tools:
          - nnn
          - htop
It is integrated into Terraform using the plugin:
resource "libvirt_domain" "ubuntu18" {
# ...
connection {
type = "ssh"
host = "192.168.2.2"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "ansible" {
plays {
enabled = true
become_method = "sudo"
playbook = {
file_path = "ansible-test.yml"
}
}
}
}
This will work fine on the first run. But later I noticed a package was missing:
- name: Ansible install package test
  hosts: all
  tasks:
    - name: Install cli tools
      become: yes
      apt:
        name: "{{ tools }}"
        update_cache: yes
      vars:
        tools:
          - nnn
          - htop
          - vim # This is a new package
When running terraform plan I'll get No changes. Infrastructure is up-to-date. My new package vim will never get installed! So Ansible didn't run, because if it had, it would have installed the new package.
The problem seems to be the provisioner itself:
Creation-time provisioners are only run during creation, not during updating or any other lifecycle. They are meant as a means to perform bootstrapping of a system.
But what is the correct way of applying updates? I tried a null_resource with a depends_on link to my VM resource, but Terraform doesn't detect changes on the Ansible side either. This seems to be a lack of change detection in the Terraform plugin.
In the docs I only found destroy-time provisioners, but none for updates. I could destroy and re-create the machine, but this would slow things down a lot. I like the Ansible approach of checking what is present and only applying the changes that aren't already in place; this seems a good way of provisioning.
Isn't it possible to do something similar with Terraform?
With my current experience (more Ansible than Terraform), I don't see any other way than dropping the nice plugin and executing Ansible on my own. But this would also drop the nice integration, so I would need to generate inventory files on my own or even by hand (which defeats the automation approach, in my view).
source_code_hash may be an option but it is inflexible: with multiple plays/roles, I would have to do this by hand for every single file, which quickly becomes error-prone.
Use a null_resource with a pseudo trigger
The idea from tedsmitt uses a timestamp as the trigger, which seems to be the only way to force a provisioner. However, running ansible-playbook plain from the CLI would create the overhead of maintaining the inventory by hand. You can't call the Python dynamic inventory script from here, since terraform apply needs to complete first.
From my point of view, a better approach would be running the ansible provisioner here:
resource "null_resource" "ansible-provisioner" {
triggers {
build_number = "${timestamp()}"
}
depends_on = ["libvirt_domain.ubuntu18"]
connection {
type = "ssh"
host = "192.168.2.2"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
provisioner "ansible" {
plays {
enabled = true
become_method = "sudo"
playbook = {
file_path = "ansible-test.yml"
}
}
}
}
The only drawback here is that Terraform will recognize a pseudo change every time:
Terraform will perform the following actions:
-/+ null_resource.ansible-provisioner (new resource required)
id: "3365240528326363062" => <computed> (forces new resource)
triggers.%: "1" => "1"
triggers.build_number: "2019-06-04T09:32:27Z" => "2019-06-04T09:34:17Z" (forces new resource)
Plan: 1 to add, 0 to change, 1 to destroy.
This seems the best compromise to me, compared to the other workarounds available.
Run Ansible manually with a dynamic inventory
Another way I found is the dynamic inventory plugin; a detailed description can be found in this blog entry. It integrates into Terraform and lets you specify resources as inventory hosts. For example:
resource "ansible_host" "k8s" {
inventory_hostname = "192.168.2.2"
groups = ["test"]
vars = {
ansible_user = "ubuntu"
ansible_ssh_private_key_file = "~/.ssh/id_rsa"
}
}
The Python script uses this information to generate a dynamic inventory, which can be used like this:
ansible-playbook -i /etc/ansible/terraform.py ansible-test.yml
A big benefit is that it keeps your configuration DRY. Terraform has the leading configuration file, so there is no need to also maintain separate Ansible files. You also get the ability to use variables (e.g. the inventory hostname shouldn't be hardcoded for production usage, as it is in my example).
In my use case (provisioning a Rancher test cluster) the null_resource approach seems better, since EVERYTHING is built with a single Terraform command; there is no need to additionally execute Ansible. But depending on the requirements it can be better to keep Ansible as a separate step, so I posted this as an alternative.
Installing the plugin
When trying this solution, remember that you need to install the corresponding Terraform plugin from here:
version=0.0.4
wget https://github.com/nbering/terraform-provider-ansible/releases/download/v${version}/terraform-provider-ansible-linux_amd64.zip -O terraform-provisioner-ansible.zip
unzip terraform-provisioner-ansible.zip
chmod +x linux_amd64/*
mv linux_amd64 ~/.terraform.d/plugins
Also notice that the automated provisioner from the solution above needs to be removed first, since it has the same name and may conflict.
As you mentioned in your question, there is no change detection in the plugin. You could implement a trigger on a null_resource so that it runs on every apply.
resource "null_resource" "ansible-provisioner" {
triggers {
build_number = "${timestamp()}"
}
provisioner "local-exec" {
command = "ansible-playbook ansible-test.yml"
}
}
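If you don't want to maintain an inventory file just for this, one option is to put an inline inventory into the local-exec command (a sketch; the IP, user and key are taken from the examples above and are assumptions for your environment):
ansible-playbook -u ubuntu --private-key ~/.ssh/id_rsa -i '192.168.2.2,' ansible-test.yml
The trailing comma makes Ansible treat the -i argument as an inline host list instead of a file path.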
You can try this; it works for me.
resource "null_resource" "ansible-swarm-setup" {
depends_on = [ local_file.ansible_inventory ]
triggers = {
instance_ids = join(",",openstack_compute_instance_v2.swarm-cluster-hosts[*].id)
}
connection {
type = "ssh"
user = var.ansible_user
timeout = "3m"
private_key = var.private_ssh_key
host = local.cluster_ips[0]
}
}
When it detects changes in the instance indices/IDs, it will trigger the Ansible playbook.
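Note that for the playbook to actually run, a provisioner block still has to live inside that null_resource; a sketch, reusing the ansible provisioner syntax from the examples above (the playbook path is an assumption):
provisioner "ansible" {
  plays {
    enabled       = true
    become_method = "sudo"
    playbook = {
      file_path = "ansible-test.yml"
    }
  }
}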
I'm running the latest version of Windows and I'm trying to use Terraform with AWS for the first time. I've created a free account and everything is ready to work.
Here is my test.tf:
provider "aws" {
access_key = "XXXXXXXXXXXXXXXXX" // don't worry i change this
secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXX" // this too
region = "eu-west-1" #Irlande
}
resource "aws_instance" "bastion" {
ami = "ami-0d063c6b"
instance_type = "t2.micro"
}
and when I run terraform plan on this, nothing happens:
Any solution to this issue?
Thanks in advance
I guess you are running the latest Terraform.
Did you run terraform init first? If you use AWS as the provider, you should be fine using S3 as the backend.
Take a look at Terraform init usage
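A minimal sequence (the path is a placeholder for wherever test.tf lives):
cd /path/to/your/config   # the directory containing test.tf
terraform init            # downloads and installs the aws provider plugin
terraform plan            # should now show the planned aws_instance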
I am trying out https://github.com/chef/knife-ec2. After bundle-installing the gems, I configured knife.rb to something like this:
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "username9999"
client_key "#{current_dir}/username9999"
validation_client_name "name_aws_test-validator"
validation_key "#{current_dir}/name_aws_test-validator.pem"
chef_server_url "https://api.opscode.com/organizations/name_aws_test"
cookbook_path ["#{current_dir}/../cookbooks"]
knife[:availability_zone] = "US West (Oregon)"
#knife[:region] = "Oregon"
knife[:image] = "ami-eb99b2db"
knife[:flavour] = "t2.micro"
knife[:aws_access_key_id] = "AKXXXXXXTTTTTTXXXX"
knife[:aws_secret_access_key] = "PrabchdthsoelfmhuhgyE"
knife[:aws_ssh_key_id] = 'ec2-test'
Now knife ec2 server create -r something returns this:
ERROR: You have not provided a valid image (AMI) value
I have made sure that I am not making a mistake with the AMI that I copied from the community AMIs. So say this is the community entry:
Centos6-template-clean-hvm - ami-07d4f737
I am taking the AMI as ami-07d4f737. Then, due to the persistent error, I created a new private AMI for myself. It still returns the same error. Any suggestions?
PS: verbosity returns nothing useful
This error could be due to one of the following reasons:
You have the correct AMI ID but the wrong region. Check whether the "Oregon" region has the AMI ID that you are using. Also, the region name is case-sensitive.
You have a wrong AMI ID.
You do not have privileges to access this AMI, but in that case it would have given a permission/access denied kind of error.
Besides, in your knife.rb settings, the value for "Availability Zone" looks wrong. There is no such AZ called "US West (Oregon)".
For the Oregon region, it is either us-west-2a, us-west-2b, or us-west-2c.
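For example, the relevant knife.rb lines could look like this (a sketch; it assumes the AMI really does live in the Oregon region):
knife[:region]            = "us-west-2"    # Oregon
knife[:availability_zone] = "us-west-2a"   # not "US West (Oregon)"
knife[:image]             = "ami-07d4f737" # must exist in the region configured above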