Terraform error when running terraform apply command - ansible

When I run terraform plan -var-file=../variables.tfvars
everything passes.
But when I run terraform apply -var-file=../variables.tfvars
it gives me the error below, and I don't know how to solve it because the directory path is correct.
Error: Error applying plan:
1 error(s) occurred:
* aws_instance.mongodb_server: 1 error(s) occurred:
* Error running command 'sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo "[mongodb]
54.193.20.170" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e "mongodb_password=blahblah" -e "mongodb_user=admin" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml': exit status 127. Output: /bin/sh: 2: ansible-playbook: not found
The code looks like this:
resource "aws_instance" "mongodb_server" {
instance_type = "${lookup(var.mongodb_instance_type_control,
var.target_env)}"
vpc_security_group_ids =
["${aws_security_group.default_internal.id}"]
ami = "${lookup(var.amazon_ami_by_location, var.aws_region)}"
key_name = "${var.key_name}"
subnet_id = "${data.aws_subnet.subnet_a.id}"
tags {
Name = "tf-mongodb-${lookup(var.environment, var.target_env)}"
}
associate_public_ip_address = true
provisioner "local-exec" {
command = "sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo \"[mongodb]\n${aws_instance.mongodb_server.public_ip}\" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e \"mongodb_password=${var.mongodb_default_password}\" -e \"mongodb_user=${var.mongodb_default_username}\" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml"
}

Output: /bin/sh: 2: ansible-playbook: not found
This is your actual error. terraform plan does not catch it, because local-exec commands are not executed during the plan phase.
Do you have Ansible installed on the machine where you are running Terraform? And if it is installed, is it on the PATH?
Try installing Ansible if it is not installed already. If it is already installed, add an echo $PATH to your local-exec command and confirm that the directory containing ansible-playbook is on that path.
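If PATH turns out to be the problem, one way to make the failure explicit (a minimal sketch; the playbook path is taken from the question, adjust to yours) is to guard the command:
provisioner "local-exec" {
  # Fail with a clear message instead of "exit status 127" when Ansible is absent.
  command = "command -v ansible-playbook >/dev/null 2>&1 || { echo 'ansible-playbook not on PATH' >&2; exit 1; }; ansible-playbook -i /tmp/inventory.ws ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml"
}
Keep in mind that local-exec runs the command under /bin/sh, whose PATH can differ from your interactive shell's, so an absolute path to ansible-playbook is another safe option.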

Related

The EC2 user_data script does not run

Hi, I'm stuck with my user_data script: it is not executed by the EC2 instance and I don't really understand why.
In Terraform I have
resource "aws_instance" "ec2_b" {
ami = "ami-0c2b8ca1dad447f8a"
instance_type = "t2.micro"
subnet_id = aws_subnet.private_b.id
vpc_security_group_ids = [aws_security_group.main.id]
tags = {
Name = "my-ec2-b"
}
key_name = "vockey"
user_data = file("./user_data.sh")
}
with the following script
#!/bin/bash
sudo echo "test" > /home/ec2-user/test.txt
sudo yum update -y
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd
sudo echo "Hello, World" | sudo tee /var/www/html/index.html
In the AWS console I can see that the script is there.
However, if I SSH into my EC2 instance, I can see that the script has not been executed.
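A quick way to check whether cloud-init ran the script at all is to inspect its logs on the instance (a sketch; these are the standard locations on Amazon Linux and Ubuntu images):
# On the instance, over SSH:
sudo cat /var/log/cloud-init-output.log   # stdout/stderr of the user_data script
ls /var/lib/cloud/instance/scripts/       # the script cloud-init extracted, if any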

Run Ansible playbook on OVH cloud instance with Terraform Cloud

I have a Terraform+Ansible combination that sets up an OVH cloud instance, and then runs an Ansible playbook on it using provisioners. When I run this locally, I can supply the public and private keys directly via the command line (not using file paths), and the terraform apply works perfectly.
On Terraform Cloud, I supply the keys as variables. When the run executes, the remote-exec provisioner works and connects to the instance as it should. However, the local-exec fails with a Permission denied (publickey). What am I missing?
My provisioner blocks:
# Dummy resource to hold the provisioner that runs ansible
resource "null_resource" "run_ansible" {
provisioner "remote-exec" {
inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]
connection {
host = openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4
type = "ssh"
user = "ubuntu"
private_key = var.pvt_key
}
}
provisioner "local-exec" {
command = "python3 -m pip install --no-input ansible; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' '--private-key=${var.pvt_key}' -e 'pub_key=${var.pub_key}' ansible/setup.yml"
}
}
Terraform Cloud run error:
TASK [Gathering Facts] *********************************************************
fatal: [xx.xxx.xxx.xx]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'xx.xxx.xxx.xx' (ECDSA) to the list of known hosts.\r\nno such identity: /home/tfc-agent/.tfc-agent/component/terraform/runs/run-AhaANkduM9YXJVoC/config/<<EOT\n-----BEGIN OPENSSH PRIVATE KEY-----<private-key>-----END OPENSSH PRIVATE KEY-----\nEOT: No such file or directory\r\nubuntu@xx.xxx.xxx.xx: Permission denied (publickey).", "unreachable": true}
I solved the problem by creating (sensitive) key files on the Terraform Cloud host and passing their paths to Ansible instead.
The variables are still supplied via Terraform Cloud, but without the heredoc syntax.
I had to add an extra newline (\n) at the end of the key to work around it being stripped. See the following issue: https://github.com/ansible/awx/issues/9082.
resource "local_sensitive_file" "key_file" {
content = "${var.pvt_key}\n"
filename = "${path.root}/.ssh/key"
file_permission = "600"
directory_permission = "700"
}
resource "local_sensitive_file" "pubkey_file" {
content = "${var.pub_key}\n"
filename = "${path.root}/.ssh/key.pub"
file_permission = "644"
directory_permission = "700"
}

How to pass a string variable with spaces in a Terraform provisioner?

I am trying to run a Terraform provisioner which calls my Ansible playbook, and I am passing a public key as a variable from the user. When the public key is passed, it doesn't take the entire key, just the leading ssh-rsa, not the complete string.
I want to pass the complete string, such as "ssh-rsa Aghdgdhfghjfdh".
The provisioner in Terraform which I am running is:
resource "null_resource" "bastion_user_provisioner" {
provisioner "local-exec" {
command = "sleep 30 && ansible-playbook ../../../../ansible/create-user.yml --private-key ${path.module}/${var.project_name}.pem -vvv -u ubuntu -e 'username=${var.username}' -e 'user_key=${var.user_key}' -i ${var.bastion_public_ip}, -e 'root_shell=/bin/rbash' -e 'raw_password=${random_string.bastion_password.result}'"
}
}
If I run the playbook alone as:
ansible-playbook -i localhost create-user.yml --user=ubuntu --private-key=kkk000.pem -e "username=kkkkk" -e 'user_key='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+GWlljlLzW6DOEo"' -e root_shell="/bin/bash"
it works.
But I want the string to be in a Terraform variable that is passed to the provisioner.
I want the key copied to the file as
ssh-rsa AWRDkj;jfdljdfldkf'sd.......
and not just
ssh-rsa
You are getting bitten by the -e key=value splitting that the command-line --extra-vars interpretation performs. What you really want is to feed -e some JSON text, to stop it from trying to split on whitespace. That will also come in handy for sufficiently complicated random-string passwords, which would otherwise produce a very bad outcome when passed on the command line.
Thankfully, there is a jsonencode() function that will help you with that problem:
resource "null_resource" "bastion_user_provisioner" {
provisioner "local-exec" {
command = <<SH
set -e
sleep 30
ansible -vvv -i localhost, -c local -e '${jsonencode({
"username"="${var.username}",
"user_key"="${var.user_key}",
"raw_password"="${random_string.bastion_password.result}",
})}' -m debug -a var=vars all
SH
}
}
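The same jsonencode() payload drops straight into the original ansible-playbook invocation; a sketch of the asker's provisioner rewritten that way (paths and flags carried over from the question):
resource "null_resource" "bastion_user_provisioner" {
  provisioner "local-exec" {
    command = <<SH
set -e
sleep 30
ansible-playbook ../../../../ansible/create-user.yml --private-key '${path.module}/${var.project_name}.pem' -u ubuntu -i '${var.bastion_public_ip},' -e '${jsonencode({
  "username" = "${var.username}",
  "user_key" = "${var.user_key}",
  "root_shell" = "/bin/rbash",
  "raw_password" = "${random_string.bastion_password.result}",
})}'
SH
  }
}
One caveat remains: the single quotes around the JSON mean a password containing a single quote would still break the shell line.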

Issue using Terraform EC2 Userdata

I am deploying a number of EC2 instances that require a mount called /data, which is a separate disk that I attach using a volume attachment in AWS.
When I run the following script manually it works fine, so the script itself works; however, when it is added via user_data I am seeing issues, and the mkfs command does not happen.
If you see my terraform config:
resource "aws_instance" "riak" {
count = 5
ami = "${var.aws_ami}"
vpc_security_group_ids = ["${aws_security_group.bastion01_sg.id}","${aws_security_group.riak_sg.id}","${aws_security_group.outbound_access_sg.id}"]
subnet_id = "${element(module.vpc.database_subnets, 0)}"
instance_type = "m4.xlarge"
tags {
Name = "x_riak_${count.index}"
Role = "riak"
}
root_block_device {
volume_size = 20
}
user_data = "${file("datapartition.sh")}"
}
resource "aws_volume_attachment" "riak_data" {
count = 5
device_name = "/dev/sdh"
volume_id = "${element(aws_ebs_volume.riak_data.*.id, count.index)}"
instance_id = "${element(aws_instance.riak.*.id, count.index)}"
}
And then the partition script is as follows:
#!/bin/bash
if [ ! -d /data ];
then mkdir /data
fi
/sbin/mkfs -t ext4 /dev/xvdh;
while [ -e /dev/xvdh ] ; do sleep 1 ; done
mount /dev/xvdh /data
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab
Now when I do this via Terraform, the mkfs doesn't appear to happen and I see no obvious errors in the syslog. If I copy the script over manually and just run bash script.sh, the mount is created and works as expected.
Has anyone got any suggestions?
Edit: It's worth noting that adding this in the AWS GUI under user data also works fine.
You could try remote-exec instead of user_data.
user_data relies on cloud-init, which can act differently depending on your cloud provider's images.
Also, I'm not sure it's a good idea to execute a script that waits for some time in the cloud-init phase: depending on your cloud provider, this may lead to the VM being considered failed to launch because of a timeout.
remote-exec may be better here, because you will be able to wait until /dev/xvdh is actually attached.
See here:
resource "aws_instance" "web" {
# ...
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh args",
]
}
}
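If you keep the question's partition script, note that the wait has to come before mkfs and the -e test needs negating; a reordered sketch (device name /dev/xvdh, as in the question):
#!/bin/bash
# Wait for the EBS attachment to surface as a device node before formatting.
while [ ! -e /dev/xvdh ]; do sleep 1; done
if [ ! -d /data ]; then mkdir /data; fi
/sbin/mkfs -t ext4 /dev/xvdh
mount /dev/xvdh /data
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab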

Terraform: looking for a simple way to use double quotation marks in commands

I need a simple way of using regular quotation marks (") in the provisioner "remote-exec" block of my Terraform script. Only " will work for what I would like to do, and just trying \" doesn't work. What's the easiest way to have Terraform interpret my command literally? For reference, here is what I am trying to run:
provisioner "remote-exec" {
inline = [
"echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' > /etc/default/docker",
]
}
Escaping with backslashes works fine for me:
$ cat main.tf
resource "null_resource" "test" {
provisioner "local-exec" {
command = "echo 'DOCKER_OPTS=\"-H tcp://0.0.0.0:2375\"' > ~/terraform/37869163/output"
}
}
$ terraform apply .
null_resource.test: Creating...
null_resource.test: Provisioning with 'local-exec'...
null_resource.test (local-exec): Executing: /bin/sh -c "echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375"' > ~/terraform/37869163/output"
null_resource.test: Creation complete
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
...
$ cat output
DOCKER_OPTS="-H tcp://0.0.0.0:2375"
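If you would rather avoid the backslashes altogether, a heredoc string keeps the command literal; a sketch (the indented <<- form needs Terraform 0.12+):
provisioner "remote-exec" {
  inline = [
    <<-EOT
      echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"' > /etc/default/docker
    EOT
  ]
}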
