Add host to known_hosts file without prompt - ansible

I am trying to add a known host to the known_hosts file using Ansible:
vagrant@jedi:/vagrant$ ansible web -m known_hosts -a "name=web state=present"
paramiko: The authenticity of host 'web' can't be established.
The ssh-rsa key fingerprint is afb8cf4885468badb1a7b8afc16ac211.
Are you sure you want to continue connecting (yes/no)?
I keep getting prompted as above.
I thought this module took care of this? Otherwise I should do a keyscan of web and add that to the known_hosts file.
What am I doing wrong or misunderstanding?

@udondan explained it very well. Just another note:
This is the default behavior of Ansible: it will always check the host key. If Ansible has never connected to that host, this prompt will appear. That is why you can't call the known_hosts module without accepting the key first.
If this is not desirable, you can set host_key_checking=False in ansible.cfg.
MORE SECURE APPROACH
The best approach is to set the environment variable export ANSIBLE_HOST_KEY_CHECKING=False while you're deploying new servers, then remove it with unset ANSIBLE_HOST_KEY_CHECKING.
Host key checking is an important security feature.
Learn more about host_key_checking.
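For reference, a minimal sketch of that ansible.cfg setting (keep it only while bootstrapping new hosts):
# ansible.cfg
[defaults]
host_key_checking = False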

The module takes care of it. But there are two problems:
1. Since you're running the task on the target host, Ansible will first try to connect to that host before it can run your task.
2. Also, since the task runs on the target host, you would add the fingerprint to the known_hosts file on that machine, not locally.
You would need to run the task on the local machine, not on the target machine(s).
ansible localhost -m known_hosts -a "name=web state=present"
Otherwise I should do a keyscan of web and add that to the known_hosts file.
I think you need to do that anyway, since the known_hosts module expects you to pass the key. It does not auto-detect the fingerprint.
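As a rough sketch of that keyscan-plus-known_hosts combination, run from the control machine (the host name web and the rsa key type are taken from the question; adjust as needed):
- hosts: localhost
  connection: local
  tasks:
    - name: Scan the SSH host key of web
      command: ssh-keyscan -t rsa web
      register: keyscan
      changed_when: false

    - name: Add web to the local known_hosts file
      known_hosts:
        name: web
        key: "{{ keyscan.stdout }}"
        state: present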

Related

How to build Ansible Playbook without username/password

I am new to Ansible and have started learning and working with Ansible playbooks, especially for network automation. As part of our hosting infra, in order to log in to any device we have a default script that runs to SSH into the device, something like goto . So there is no need to give any username or password; it logs into the device directly.
How can we include this customization in an Ansible playbook without using any username or password?
Ansible supports using ssh keys.
Confirm that you can connect using SSH to all the nodes in your inventory using the same username. If necessary, add your public SSH key to the authorized_keys file on those systems.
Refer to the documentation here.
Also, it is a good idea to read the 'getting started' page.
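As a hedged sketch (the host and user names below are placeholders): first copy your public key to the node, then point Ansible at that user and key in your inventory:
ssh-copy-id admin@device01.example.com
[network]
device01.example.com ansible_user=admin ansible_ssh_private_key_file=~/.ssh/id_rsa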
You will still need to supply a username that the SSH key belongs to:
Guide on setting up an SSH key for a Linux user: Here
Once the SSH key is configured and copied over to your Ansible server:
Edit the sudoers file on the slave node and set NOPASSWD for the user; that way your user won't be prompted for a password when you are running sudo commands: Reference Here
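A minimal sketch of that sudoers entry (deploy is a hypothetical username; edit with visudo, e.g. visudo -f /etc/sudoers.d/deploy):
# allow the deploy user to run sudo commands without a password prompt
deploy ALL=(ALL) NOPASSWD: ALL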

Using cloud-init to write_files to ~/.ssh/ breaks SSH to EC2 instance (perhaps any machine)

I'm using a CloudFormation template in YAML with embedded cloud-init UserData to set the hostname, install packages, and so on, and I've found that once I include the write_files directive it breaks the default SSH key on the EC2 instance, i.e. it seems to interfere with whatever process AWS uses to manage authorized_keys; in the EC2 logs I can see fingerprints of random keys being generated, not the expected keypair.
#cloud-config
hostname: ${InstanceHostname}
fqdn: ${InstanceHostname}.${PublicDomainName}
manage_etc_hosts: true
package_update: true
package_upgrade: true
packages:
  - build-essential
  - git
write_files:
  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: "ubuntu:ubuntu"
    content: |
      Host github.com
        IdentityFile ~/.ssh/git
      Host *.int.${PublicDomainName}
        IdentityFile ~/.ssh/default
        User ubuntu
power_state:
  timeout: 120
  message: Rebooting to ensure hostname has stuck correctly
  mode: reboot
Removing the write_files block works fine; leave it in and I cannot SSH to the host due to an SSH key mismatch.
So is it due to writing a file into ~/.ssh? Maybe ~/.ssh/authorized_keys gets deleted? Or maybe the permissions on the directory are changed?
Appending to ~/.ssh/authorized_keys with runcmd works fine, but I'd like to use the proper write_files method for larger files.
For AWS EC2 Linux instances, the SSH public key from your keypair is stored in ~/.ssh/authorized_keys. If you overwrite it with something else, make sure that you understand the implications.
The correct procedure is to append public keys from keypairs to authorized_keys AND set the correct file permissions.
If you are setting up a set of keypairs in authorized_keys, this is OK also. Make sure that you are formatting the file correctly with public keys and setting the file permissions correctly.
The file permissions should be 644 so that the SSH server can read them.
Another possible issue is that when you change authorized_keys you may also need to restart the SSH server, but I see that you are rebooting the server, which removes that problem.
Ubuntu example:
sudo service ssh restart
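As a hedged sketch of that permission fix (assuming the default ubuntu user, and that write_files may have created ~/.ssh with the wrong ownership), a runcmd step after write_files could re-assert ownership and modes before the reboot:
runcmd:
  # restore ownership and permissions on ~/.ssh after write_files
  - chown -R ubuntu:ubuntu /home/ubuntu/.ssh
  - chmod 700 /home/ubuntu/.ssh
  - chmod 600 /home/ubuntu/.ssh/config
  - chmod 644 /home/ubuntu/.ssh/authorized_keys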

Unable to connect to AWS instance even after manually adding in public key to authorized_keys

I am unable to run an ansible-playbook or use ansible ping on an AWS instance. However, I can ssh into the instance with no problem. My hosts file is this:
[instance]
xx.xx.xxx.xxx ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/home/josh/Ansible/Amazon/AWS.pem
Should I not use a direct path? I am trying to use Ansible to install Apache onto the server (the playbook task is service: name=apache2 state=started). In my security group in the AWS console, I allowed all incoming ssh traffic on port 22, and Ansible tries to ssh through port 22, so that should not be the problem. Is there some crucial idea behind sshing into instances that I didn't catch onto? I tried following this post: Ansible AWS: Unable to connect to EC2 instance, but to no avail.
Make sure that inside ansible.cfg you have:
private_key_file = path of private key (server private key)
And on the host machine, don't change the default authorized_keys file; a better way is to create one user, create a .ssh directory for that user, and then inside it create a file called authorized_keys and paste your server public key into it:
~/.ssh/authorized_keys
Try: ansible-playbook yourplaybookname.yml --connection=local
Ansible defaults to SSH.
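A minimal sketch of that ansible.cfg (the key path and user are taken from the question):
# ansible.cfg
[defaults]
private_key_file = /home/josh/Ansible/Amazon/AWS.pem
remote_user = ubuntu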

Vagrant, Ansible, and SSH forwarding

I'm trying to use Vagrant and Ansible to create a developer VM environment. I'm able to connect just fine and install packages. My issue seems to be with ssh, git, and keyfiles. My setup is unfortunately rather complicated, and I don't have the ability to change that. The git repositories are hosted on a machine that I have to connect to via a bastion host with a keyfile.
My local ssh config file has all the necessary proxy commands to make this work. I have SSH forwarding my key, because I can log into the VM manually and use git. Via Ansible it doesn't seem to know about hosts that should be set up via the ssh config file.
I am not running the git clone as sudo, and I am using accept_hostkey. It just doesn't seem to know about the repository host at all.
I have also tried adding an ansible.cfg with the following setting:
ssh_args = -o ControlPersist=15m -F ssh.config -q
The ssh.config file is the same as my ~/.ssh/config that happens to work when doing the git clones manually. I'm also doing this as the vagrant user manually, and I have remote_user set to vagrant in my playbook.
I'm just kind of stumped as to how this is supposed to work.
If I understand correctly, you can do the git clone manually inside your Vagrant machine?
If yes, then since you have already told us that both machines have exactly the same ~/.ssh/config file, you can do what I did during a git clone when I got an error:
- name: Pull sources from the repository.
  git: repo='git@bitbucket.org:test/test.git' version=master dest=/var/www accept_hostkey=True force=yes recursive=no key_file=~/.ssh/id_rsa
Sometimes, explicitly defining key_file, accept_hostkey=True, and force=yes solves the problem.
On the other hand, if you want to explicitly define that Ansible should always use the ssh connection instead of paramiko, you can set this in your ansible.cfg file, which is located at /etc/ansible/ansible.cfg:
[defaults]
transport=ssh
There is another technique that I have read about somewhere which you can also try: teach Ansible to talk to the Git server on your behalf (again, this change goes in /etc/ansible/ansible.cfg):
[ssh_connection]
ssh_args = -o ForwardAgent=yes
Hope this will help you. Thanks
I'm not too familiar with Ansible, but from the docs, Ansible supports two SSH transports: OpenSSH and Paramiko (a Python SSH library).
Unless you manually choose which one to use, it might choose Paramiko instead of OpenSSH.
This could explain the trouble you are having, since ssh_args is an OpenSSH-specific setting.
So the issue turned out to be that I was actually running one of my git clones as root after all.
For the SSH key to be forwarded properly in that case, you have to edit /etc/sudoers (with visudo) and update env_keep so that SSH_AUTH_SOCK is preserved.
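A minimal sketch of that sudoers change (edit with visudo):
# keep the forwarded agent socket available when running commands via sudo
Defaults    env_keep += "SSH_AUTH_SOCK"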

use ssh private key from host in vagrant guest

I want to clone a bunch of private git repositories while provisioning a Vagrant box. According to this article this should be possible using config.ssh.forward_agent = true. However, when trying to connect to GitHub via something like ssh -T git@github.com -o StrictHostKeyChecking=no, it fails with the following error:
Warning: Permanently added 'github.com,192.30.252.130' (RSA) to the list of known hosts.
Permission denied (publickey).
I cut my configuration down to the simplest possible configuration. You can find it here: https://gist.github.com/TomTasche/31f7c45fcffc2997d43a
When I do "vagrant ssh" and try the same again, a similar error occurs:
Cloning into 'private-repositories'...
Warning: Permanently added the RSA host key for IP address '192.30.252.130' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Edit: the configuration linked above does work on a host running Ubuntu, but works on neither a Mac host nor a Windows host. My goal is to have a configuration that works on all three hosts.
Please check whether your host system has ssh-agent forwarding enabled. You can do so for example by adding this block to your ~/.ssh/config file:
Host *
  ForwardAgent yes
If this is enabled, vagrant ssh (and also vagrant provision) should be able to forward your key to the guest machine.
You also might want to check with ssh-add -l whether your ssh-agent knows about your SSH key. If it is in the list and you have agent forwarding activated, you should be successful. Otherwise, you can add the key to your ssh-agent by running ssh-add <path to your key file>.
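For completeness, a minimal Vagrantfile sketch with agent forwarding turned on (the box name is just an example):
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # example box
  config.ssh.forward_agent = true     # forward the host's ssh-agent into the guest
end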
It sounds like you may be hitting this particular bug:
https://github.com/mitchellh/vagrant/issues/1735
(Despite it being "closed" it's actually not fixed)
On Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh).
However, there is a workaround or simple hack. You can auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your Vagrantfile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
Tom,
What you're doing is fairly generic in nature and I don't think it is Vagrant-specific.
Try some of the following to track down the issue:
Edit your /etc/ssh/sshd_config
Set LogLevel DEBUG
Restart the sshd service: sudo service sshd restart or /etc/init.d/sshd restart
tail -f /var/log/authlog -- note, the file may be something else like /var/log/authd.log or /var/log/secure.
Watch what happens when you connect. It should give you some indication of why it's failing.
Again sorry, I'm not that familiar with Vagrant but I'm wondering if the provisioning script is running as another user, in which case the agent forwarding may not work as expected?
