Ansible does not run without password file

I have a very simple setup:
ansible-playbook -vvv playbooks/init.yml -i inventories/dev/local.hosts
ansible-playbook 2.7.8
config file = /Users/user/code/lambda/pahan/ansible/ansible.cfg
configured module search path = ['/Users/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.7.8/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.7.2 (default, Feb 12 2019, 08:15:36) [Clang 10.0.0 (clang-1000.11.45.5)]
Using /Users/user/code/lambda/pahan/ansible/ansible.cfg as config file
ERROR! The vault password file /Users/user/.vault_pass.txt was not found
The password file is not referenced anywhere, and no password from the vault is used:
ansible.cfg
[defaults]
roles_path = ./roles
[ssh_connection]
control_path=~/%%h-%%r
pipelining = True
Why is it trying to access the password file anyway?

It turns out that an environment variable left set in my shell was forcing Ansible to read the password file.
I was able to debug this with: ansible-config -c ansible.cfg dump
ACTION_WARNINGS(default) = True
AGNOSTIC_BECOME_PROMPT(default) = False
ALLOW_WORLD_READABLE_TMPFILES(default) = False
ANSIBLE_COW_PATH(default) = None
ANSIBLE_COW_SELECTION(default) = default
ANSIBLE_COW_WHITELIST(default) = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes
ANSIBLE_FORCE_COLOR(default) = False
ANSIBLE_NOCOLOR(default) = False
ANSIBLE_NOCOWS(default) = False
ANSIBLE_PIPELINING(/Users/user/code/lambda/pahan/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(default) = -C -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_CONTROL_PATH(/Users/user/code/lambda/pahan/ansible/ansible.cfg) = ~/%%h-%%r
ANSIBLE_SSH_CONTROL_PATH_DIR(default) = ~/.ansible/cp
ANSIBLE_SSH_EXECUTABLE(default) = ssh
ANSIBLE_SSH_RETRIES(default) = 0
ANY_ERRORS_FATAL(default) = False
BECOME_ALLOW_SAME_USER(default) = False
CACHE_PLUGIN(default) = memory
CACHE_PLUGIN_CONNECTION(default) = None
CACHE_PLUGIN_PREFIX(default) = ansible_facts
CACHE_PLUGIN_TIMEOUT(default) = 86400
...
After removing the env variable, Ansible was able to run again.
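For reference, a quick way to track down and clear such a variable; the variable name is not shown above, so ANSIBLE_VAULT_PASSWORD_FILE is an assumption here:
# Show only settings that differ from the defaults; a stray
# DEFAULT_VAULT_PASSWORD_FILE entry points at the culprit.
ansible-config dump --only-changed
# List vault-related variables in the current shell
env | grep -i vault
# Clear the offending variable for this session (name assumed)
unset ANSIBLE_VAULT_PASSWORD_FILE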

Related

Run Ansible playbook on OVH cloud instance with Terraform Cloud

I have a Terraform+Ansible combination that sets up an OVH cloud instance, and then runs an Ansible playbook on it using provisioners. When I run this locally, I can supply the public and private keys directly via the command line (not using file paths), and the terraform apply works perfectly.
On Terraform Cloud, I create the keys as variables. When I run the Terraform plan, the remote-exec provisioner works, and connects to the instance as it should. However, the local-exec fails with a Permission denied (publickey). What am I missing?
My provisioner blocks:
# Dummy resource to hold the provisioner that runs ansible
resource "null_resource" "run_ansible" {
  provisioner "remote-exec" {
    inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]
    connection {
      host        = openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4
      type        = "ssh"
      user        = "ubuntu"
      private_key = var.pvt_key
    }
  }
  provisioner "local-exec" {
    command = "python3 -m pip install --no-input ansible; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu -i '${openstack_compute_instance_v2.test_instance.network[0].fixed_ip_v4},' '--private-key=${var.pvt_key}' -e 'pub_key=${var.pub_key}' ansible/setup.yml"
  }
}
Terraform cloud run error:
TASK [Gathering Facts] *********************************************************
fatal: [xx.xxx.xxx.xx]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'xx.xxx.xxx.xx' (ECDSA) to the list of known hosts.\r\nno such identity: /home/tfc-agent/.tfc-agent/component/terraform/runs/run-AhaANkduM9YXJVoC/config/<<EOT\n-----BEGIN OPENSSH PRIVATE KEY-----<private-key>-----END OPENSSH PRIVATE KEY-----\nEOT: No such file or directory\r\nubuntu@xx.xxx.xxx.xx: Permission denied (publickey).", "unreachable": true}
I solved the problem by creating (sensitive) key files on the Terraform Cloud host and passing their paths to Ansible instead.
The variables are still supplied via Terraform Cloud, but without the heredoc syntax.
I had to add an extra newline \n at the end of each key to get around it being stripped. See the following issue: https://github.com/ansible/awx/issues/9082.
resource "local_sensitive_file" "key_file" {
  content              = "${var.pvt_key}\n"
  filename             = "${path.root}/.ssh/key"
  file_permission      = "600"
  directory_permission = "700"
}
resource "local_sensitive_file" "pubkey_file" {
  content              = "${var.pub_key}\n"
  filename             = "${path.root}/.ssh/key.pub"
  file_permission      = "644"
  directory_permission = "700"
}
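With the key files in place, the local-exec command can hand Ansible those paths instead of the raw key material. A rough sketch of the resulting shell command, where 203.0.113.10 stands in for the instance's fixed_ip_v4 and pub_key_file is a hypothetical extra-var name not taken from the original playbook:
# Run from the Terraform working directory, after the local_sensitive_file resources are created
python3 -m pip install --no-input ansible
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
  -u ubuntu \
  -i '203.0.113.10,' \
  --private-key=.ssh/key \
  -e pub_key_file=.ssh/key.pub \
  ansible/setup.yml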

Templatefile and Bash script

I need to run a bash script as user data for a launch template, and this is how I try to do it:
resource "aws_launch_template" "ec2_launch_template" {
  name          = "ec2_launch_template"
  image_id      = data.aws_ami.latest_airbus_ami.id
  instance_type = var.instance_type[terraform.workspace]
  iam_instance_profile {
    name = aws_iam_instance_profile.ec2_profile.name
  }
  vpc_security_group_ids = [data.aws_security_group.default-sg.id, aws_security_group.allow-local.id] # the second parameter should be according to the user
  monitoring {
    enabled = true
  }
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      volume_size = 30
      encrypted   = true
      volume_type = "standard"
    }
  }
  tags = {
    Name = "${var.app_name}-${terraform.workspace}-ec2-launch-template"
  }
  #user_data = base64encode(file("${path.module}/${terraform.workspace}-script.sh")) # change the base encoder as well
  user_data = base64encode(templatefile("${path.module}/script.sh", { app_name = var.app_name, env = terraform.workspace, high_threshold = var.high_threshold, low_threshold = var.low_threshold })) # change the base encoder as well
}
As you can see, I pass the parameters as a map to the templatefile function, and I retrieve them in the script like this:
#!/bin/bash -xe
# Activate logs for everything
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
# Retrieve variables from Terraform
app_name = ${app_name}
environment = ${env}
max_memory_perc= ${high_threshold}
min_memory_perc= ${low_threshold}
instance_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ami_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/ami-id)
instance_type=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-type)
scale_up_name=$${app_name}"-"$${environment}"-scale-up"
scale_down_name=$${app_name}"-"$${environment}"-scale-down"
Then, when I look at the launch template in the AWS console, I can see that the parameter values are filled in:
app_name = test-app
environment = prod
max_memory_perc= 80
min_memory_perc= 40
The problem I have is that when I run it, I get this error:
+ app_name = test-app
/var/lib/cloud/instances/scripts/part-001: line 7: app_name: command not found
I assume there is a problem with how the script is interpreted, but I cannot put my finger on it.
Any ideas?
Thanks
As they said, it was a problem with the spaces: bash assignments must not have spaces around =, so app_name = test-app is parsed as a call to a command named app_name, hence the "command not found" error. It's fixed now; the corrected assignments are sketched below.
Thanks
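A minimal sketch of the corrected template, keeping the same variables as above (no spaces around =, and shell-side expansions stay escaped as $${...} so templatefile leaves them for the shell):
#!/bin/bash -xe
# Retrieve variables from Terraform: bash assignments take no spaces around '='
app_name=${app_name}
environment=${env}
max_memory_perc=${high_threshold}
min_memory_perc=${low_threshold}
# Escaped with $$ so these are expanded by the shell at runtime, not by templatefile
scale_up_name="$${app_name}-$${environment}-scale-up"
scale_down_name="$${app_name}-$${environment}-scale-down"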

Terraform error when running the terraform apply command

When I run terraform plan -var-file=../variables.tfvars
everything passes.
But when I run terraform apply -var-file=../variables.tfvars
it gives me this error, and I don't know how to solve it because the directory path is correct.
Error: Error applying plan:
1 error(s) occurred:
* aws_instance.mongodb_server: 1 error(s) occurred:
* Error running command 'sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo "[mongodb]
54.193.20.170" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e "mongodb_password=blahblah" -e "mongodb_user=admin" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml': exit status 127. Output: /bin/sh: 2: ansible-playbook: not found
The code looks like this:
resource "aws_instance" "mongodb_server" {
  instance_type          = "${lookup(var.mongodb_instance_type_control, var.target_env)}"
  vpc_security_group_ids = ["${aws_security_group.default_internal.id}"]
  ami                    = "${lookup(var.amazon_ami_by_location, var.aws_region)}"
  key_name               = "${var.key_name}"
  subnet_id              = "${data.aws_subnet.subnet_a.id}"
  tags {
    Name = "tf-mongodb-${lookup(var.environment, var.target_env)}"
  }
  associate_public_ip_address = true
  provisioner "local-exec" {
    command = "sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo \"[mongodb]\n${aws_instance.mongodb_server.public_ip}\" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e \"mongodb_password=${var.mongodb_default_password}\" -e \"mongodb_user=${var.mongodb_default_username}\" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml"
  }
}
Output: /bin/sh: 2: ansible-playbook: not found
This is your actual error. terraform plan does not catch it because local-exec commands are not evaluated during the plan.
Do you have Ansible installed on the machine where you are running Terraform? And if it is installed, is it on the PATH?
Try installing Ansible if it is not installed already. If it is, run echo $PATH in your local-exec command and confirm that the directory containing ansible-playbook is on it; a quick check is sketched below.
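A minimal sketch of those checks, run on the machine that executes terraform apply (the pip install and the ~/.local/bin path are assumptions, not taken from the question):
# Is ansible-playbook visible to the shell that local-exec will use?
command -v ansible-playbook || echo "ansible-playbook is not on PATH"
echo "$PATH"
# One common way to install it if it is missing (assumes pip is available)
python3 -m pip install --user ansible
# pip --user installs usually land in ~/.local/bin; add it to PATH if needed
export PATH="$HOME/.local/bin:$PATH"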

Shell provisioning in a multi-machine vagrantfile

How do I provision VMs created in a multi-machine Vagrantfile? I want to execute a separate shell provisioning script in each of the machines created, but I am unable to figure out how Vagrant facilitates this.
$kitCoreScript = <<SCRIPT
set -e
set -x
mkdir kitCoreFolder
exit
SCRIPT

$agentScript = <<SCRIPT
set -e
set -x
mkdir agentFolder
exit
SCRIPT

Vagrant.configure(2) do |config|
  config.ssh.private_key_path = "rack_rsa"
  config.vm.define "kitcore" do |kitcore|
    kitcore.vm.provider :rackspace do |rs|
      rs.username = "username"
      rs.api_key = "1232134rewf324e2qede132423"
      rs.admin_password = "pass1"
      rs.flavor = /1 GB Performance/
      rs.image = /Ubuntu 12.04/
      rs.rackspace_region = :dfw
      rs.server_name = "kit-core"
      rs.public_key_path = "rack_rsa.pub"
    end
    kitcore.provision :shell, :inline => $kitCoreScript
  end
  config.vm.define "agents" do |agents|
    agents.vm.provider :rackspace do |rs|
      rs.username = "username"
      rs.api_key = "2314rwef45435342543r"
      rs.admin_password = "pass1"
      rs.flavor = /1 GB Performance/
      rs.image = /Ubuntu 12.04/
      rs.rackspace_region = :dfw
      rs.server_name = "agnet"
      rs.public_key_path = "rack_rsa.pub"
    end
    agent.provision :shell, :inline => $agentScript
  end
end
Upon running the above Vagrantfile, I get the error message below from Vagrant.
dev-setup-scripts vagrant up
There are errors in the configuration of this machine. Please fix
the following errors and try again:
Vagrant:
* Unknown configuration section 'provision'.
Any help is highly appreciated.
You need to replace:
kitcore.provision
with
kitcore.vm.provision
Do the same for the VM called agent (agent.provision becomes agent.vm.provision).

Vagrant ssh: `private_key_path` file must exist

I'm getting this error during vagrant up
There are errors in the configuration of this machine. Please fix
the following errors and try again:
SSH:
* `private_key_path` file must exist: insecure_key
How do I set up the private key so that I can connect with vagrant ssh? I'm using Windows 7.
My Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "phusion" do |v|
    v.vm.provider "docker" do |d|
      d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
      d.image = "phusion/baseimage"
      d.name = 'dockerizedvm'
      d.has_ssh = true
      #d.force_host_vm = true
    end
    v.ssh.port = 22
    v.ssh.username = 'root'
    v.ssh.private_key_path = 'insecure_key'
    v.vm.provision "shell", inline: "echo hello"
    #v.vm.synced_folder "./keys", "/vagrant"
  end
end
In my case I was using Cygwin on Windows, and I received:
* `private_key_path` file must exist:
C:\cygwin64\home\basic.user/.vagrant.d/insecure_private_key
After a few minutes of investigation I realized the VAGRANT_HOME environment variable was wrong, so exporting the right value did the job:
VAGRANT_HOME=/cygdrive/c/Users/basic.user
export VAGRANT_HOME
insecure_key should be a file containing an SSH private key, placed in the same folder where you run vagrant up. One way to obtain it is to download the phusion/baseimage insecure key:
curl -o insecure_key -fSL https://github.com/phusion/baseimage-docker/raw/master/image/insecure_key
chmod 600 insecure_key
vagrant ssh
