SSH authenticity issue when using Ansible as a Terraform provisioner

I am trying to use an Ansible playbook as a provisioner for my Terraform project, but I get an SSH authenticity prompt and the run hangs forever.
The authenticity of host 'xxxx' can't be established.
ECDSA key fingerprint is SHA256:xxxx.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I turned host key checking off in my ansible.cfg file, but it doesn't seem to help:
host_key_checking = False
Any ideas on how to fix it?
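One thing worth checking: Ansible only reads ansible.cfg from ANSIBLE_CONFIG, the current working directory, ~/.ansible.cfg, or /etc/ansible/ansible.cfg, and the directory Terraform runs the provisioner from may not be the one holding your ansible.cfg. A minimal sketch, assuming a local-exec provisioner inside the aws_instance resource and a hypothetical playbook.yml, that forces the setting through an environment variable instead:
provisioner "local-exec" {
  # Hypothetical playbook name and inline inventory; adjust to your project.
  command = "ansible-playbook -i '${self.public_ip},' playbook.yml"
  environment = {
    ANSIBLE_HOST_KEY_CHECKING = "False"
  }
}
The related questions below show other ways around the same prompt, such as running ssh-keyscan before the playbook.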

Related

How to make an SSH connection using Pageant in Terraform for provisioning files?

How do I make an SSH connection via Pageant in Terraform? I'm trying to provision files with the file provisioner over an SSH connection. According to the docs, on Windows the only supported SSH agent is Pageant, but they don't explain how to configure it.
https://www.terraform.io/docs/provisioners/connection.html
Even after adding the PuTTY directory (which is included in GitExtension) to the PATH environment variable, Terraform does not seem to detect it and keeps failing to make the SSH connection.
Connecting via plink.exe works, so my SSH key is correctly added to Pageant:
plink core@<ip-address-of-host>
The file provisioner works when I pass the private key content directly like this, but that's not what I want:
connection {
  type        = "ssh"
  host        = aws_instance.instance.public_ip
  user        = "core"
  agent       = false
  private_key = file(var.private_key_path)
}
You have to set the agent parameter to true:
agent - Set to false to disable using ssh-agent to authenticate. On Windows the only supported SSH authentication agent is Pageant.
agent = true
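Applied to the connection block above, that looks roughly like this (a sketch; with agent = true the private_key argument is dropped so the key comes from Pageant instead):
connection {
  type  = "ssh"
  host  = aws_instance.instance.public_ip
  user  = "core"
  agent = true
}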

Unable to connect to AWS instance even after manually adding the public key to authorized_keys

I am unable to run an ansible-playbook or use the Ansible ping module against an AWS instance. However, I can SSH into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server, with a task like:
service: name=apache2 state=started
In my security group in the AWS console I allowed all incoming SSH traffic on port 22, and Ansible tries to SSH through port 22, so that should not be the problem. Is there some crucial idea behind SSHing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you set:
private_key_file = /path/to/server-private-key
On the target machine, don't change the default authorized_keys file; a better way is to create a dedicated user, create a .ssh directory for that user, and inside it create a file called authorized_keys and paste your server public key into it:
~/.ssh/authorized_keys
You can also try: ansible-playbook yourplaybookname.yml --connection=local
(Ansible defaults to connecting over SSH.)
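To confirm the key and user are actually being picked up, an ad-hoc ping against the inventory group from the question is a quick check (a sketch; the inventory file name hosts is an assumption, and the flags just repeat what the inventory already sets so misread config can be ruled out):
ansible instance -m ping -i hosts -u ubuntu --private-key /home/josh/Ansible/Amazon/AWS.pem
If that returns pong, the playbook should be able to connect the same way.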

Ansible can't ping my vagrant box with the vagrant insecure public key

I'm using Ansible 2.4.1.0 and Vagrant 2.0.1 with VirtualBox on macOS, and although provisioning of my Vagrant box works fine with Ansible, I get an unreachable error when I try to ping it with:
➜ ansible all -m ping
vagrant_django | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
    "unreachable": true
}
The solutions offered on similar questions didn't work for me (like adding the vagrant insecure pub key to my ansible config). I just can't get it to work with the vagrant insecure public key.
Fwiw, here's my ansible.cfg file:
[defaults]
host_key_checking = False
inventory = ./ansible/hosts
roles_path = ./ansible/roles
private_key_file = ~/.vagrant.d/insecure_private_key
And here's my ansible/hosts file (ansible inventory):
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
What did work was using my own SSH public key. When I add that to authorized_keys on my Vagrant box, I can ping it with Ansible:
➜ ansible all -m ping
vagrant_django | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
I can't connect via SSH with the insecure key either, so that seems to be the underlying problem, which is fixed by adding my own public key to authorized_keys on the Vagrant box.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
PS: To clarify, although the root cause is similar to this other question, the symptoms and context are different. I could provision my box with ansible, but couldn't ansible ping it. This justifies another question imho.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
Because the Vagrant insecure key is used only for the initial connection to the box. By default Vagrant replaces it with a freshly generated key, which you'll find in .vagrant/machines/<machine_name>/virtualbox/private_key under the project directory.
You'll also find an automatically generated Ansible inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory if you use the Ansible provisioner in your Vagrantfile, so you don't need to create your own.
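As a sketch, assuming the machine in the Vagrantfile is named default and that Ansible is run from the project directory, the inventory entry from the question would point at that generated key instead of the insecure one:
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key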

Cannot run Ansible against an AWS EC2 instance created by Terraform

I have an EC2 instance created by Terraform, and I can log in to it using:
ssh -vvvv -i /home/ec2-user/.ssh/mykey.pub ec2-user@XX.XX.XX.XX
without a password; XX.XX.XX.XX is the IP of the EC2 instance created by Terraform.
But when I try to run the Ansible playbook from Terraform once the EC2 instance is created, Ansible cannot connect and the error message is:
aws_instance.dev (local-exec): TASK [Gathering Facts]
*********************************************************
The authenticity of host 'XX.XX.XX.XX (XX.XX.XX.XX)' can't be
established.
...
Are you sure you want to continue connecting (yes/no)?
aws_instance.dev: Still creating... (6m40s elapsed)
Note that the Ansible playbook is started after I manually force Terraform to sleep for 6 minutes, and at that point the EC2 instance has already started (I can log in to it myself, although Terraform still shows "aws_instance.dev: Still creating..."), i.e.:
resource "aws_instance" "dev" {
...
provisioner "local-exec" {
command = "sleep 6m && ansible-playbook -i hosts myansible.yml"
}
...
}
I run Terraform as ec2-user, and in the Ansible playbook I set:
remote_user: ec2-user
become_user: ec2-user
What is the reason Ansible cannot SSH to the EC2 instance?
There is a message for you:
The authenticity of host 'XX.XX.XX.XX (XX.XX.XX.XX)' can't be
established.
...
Are you sure you want to continue connecting (yes/no)?
Either execute ssh-keyscan XX.XX.XX.XX before running ansible-playbook, or disable host key checking in Ansible.
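A sketch of the first option, folded into the local-exec command from the question (it assumes the provisioner sits inside the aws_instance resource so self.public_ip resolves, and that appending to ~/.ssh/known_hosts is where the Ansible controller looks for host keys):
provisioner "local-exec" {
  # Scan the new instance's host key before the playbook runs, so SSH won't prompt.
  command = "sleep 6m && ssh-keyscan ${self.public_ip} >> ~/.ssh/known_hosts && ansible-playbook -i hosts myansible.yml"
}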

Add host to known_hosts file without prompt

I am trying to add a host to the known_hosts file using Ansible:
vagrant#jedi:/vagrant$ ansible web -m known_hosts -a "name=web state=present"
paramiko: The authenticity of host 'web' can't be established.
The ssh-rsa key fingerprint is afb8cf4885468badb1a7b8afc16ac211.
Are you sure you want to continue connecting (yes/no)?
I keep getting prompted as above.
I thought this module took care of this? Otherwise I should do a keyscan of web and add that to the known_hosts file.
What am I doing wrong or misunderstanding?
@udondan explained it very well. Just another note:
This is the default behavior of Ansible: it will always check the host key. If Ansible has never connected to that host, this prompt will appear. That is why you aren't able to call the known_hosts module without accepting the key first.
If this is not desirable, you can set host_key_checking = False in ansible.cfg.
MORE SECURE APPROACH
The best approach is to set this variable only while you're deploying new servers, export ANSIBLE_HOST_KEY_CHECKING=False, and then remove it afterwards with unset ANSIBLE_HOST_KEY_CHECKING.
Host key checking is an important security feature.
Learn more about host_key_checking.
The module takes care of it. But there are two problems:
Since you're running the task on the target host, Ansible first has to connect to that host before it can run your task.
Also, since the task runs on the target host, you would add the fingerprint to the known_hosts file on that machine, not locally.
You would need to run the task on the local machine, not on the target machine(s).
ansible localhost -m known_hosts -a "name=web state=present"
Otherwise I should do a keyscan of web and add that to the known_hosts file.
I think you need to do that anyway, since the known_hosts module expects you to pass the key. It does not auto-detect the fingerprint.
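Putting both points together, a sketch that scans the key on the control machine and feeds it to the known_hosts module run against localhost (it assumes web resolves from the control machine and that the default ~/.ssh/known_hosts there is the intended destination):
# Scan the host key on the control machine (the file path is just a scratch location)
ssh-keyscan -t rsa web > /tmp/web.hostkey
# Pass the scanned key to the known_hosts module, running it locally
ansible localhost -m known_hosts -a "name=web key='$(cat /tmp/web.hostkey)' state=present"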
