Ansible is using the Vagrant IdentityFile and not the one for the user on the box? - vagrant

When running commands with Ansible on a Vagrant box, it uses the identity file located here:
IdentityFile="/Users/me/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty32/0/virtualbox/vagrant_private_key"
instead of the file on the box: ~/.ssh/id_rsa.
What can I do to fix this? This is my task by the way:
---
- name: Fetch the Htt Database
  run_once: true
  delegate_to: 543.933.824.210
  remote_user: ubuntu
  become_user: root
  fetch: src=/home/ubuntu/file.sql.gz dest=/tmp/file.sql.bz fail_on_missing=yes

By default, Vagrant uses insecure_private_key to log in as the vagrant user. That is not secure, because everyone knows that key, so if you want to use your own SSH key you can modify your Vagrantfile by adding these lines:
config.ssh.username = "username"
config.ssh.private_key_path = "fullpath-of-ssh-private-key"
config.ssh.insert_key = false
According to the Vagrant documentation:
config.ssh.insert_key - If true, Vagrant will automatically insert a
keypair to use for SSH, replacing Vagrant's default insecure key
inside the machine if detected. By default, this is true.
This only has an effect if you do not already use private keys for
authentication or if you are relying on the default insecure key. If
you do not have to care about security in your project and want to
keep using the default insecure key, set this to false.
Also make sure that the public key matching your private key is present in the guest VM under /home/username/.ssh.
For more information, see the Vagrant documentation.
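On the Ansible side you can also point the delegated host at the key you want, for example per host in the inventory. A minimal sketch using the host from the question (the group name is a placeholder, and the key path refers to a private key on the Ansible control machine; adjust both to your setup):
[delegated]
# ubuntu user on the delegated host, authenticated with the control machine's own key
543.933.824.210 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_rsa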

Related

Using cloud-init to write_files to ~/.ssh/ breaks SSH to EC2 instance (perhaps any machine)

I'm using a CloudFormation template in YAML with embedded cloud-init UserData to set the hostname, install packages, and so on. I've found that once I include the write_files directive it breaks the default SSH key on the EC2 instance, i.e. it seems to interfere with whatever process AWS uses to manage authorized_keys; in the EC2 logs I can see fingerprints of random keys being generated, not the expected keypair.
#cloud-config
hostname: ${InstanceHostname}
fqdn: ${InstanceHostname}.${PublicDomainName}
manage_etc_hosts: true
package_update: true
package_upgrade: true
packages:
  - build-essential
  - git
write_files:
  - path: /home/ubuntu/.ssh/config
    permissions: '0600'
    owner: "ubuntu:ubuntu"
    content: |
      Host github.com
        IdentityFile ~/.ssh/git
      Host *.int.${PublicDomainName}
        IdentityFile ~/.ssh/default
        User ubuntu
power_state:
  timeout: 120
  message: Rebooting to ensure hostname has stuck correctly
  mode: reboot
Removing the write_files block works fine; leave it in and I cannot SSH to the host due to an SSH key mismatch.
So is it due to writing a file to ~/.ssh? Maybe ~/.ssh/authorized_keys gets deleted, or the permissions on the directory are changed?
Appending to ~/.ssh/authorized_keys with runcmd works fine, but I'd like to use the proper write_files method for larger files.
For AWS EC2 Linux instances, the SSH public key from your keypair is stored in ~/.ssh/authorized_keys. If you overwrite it with something else, make sure that you understand the implications.
The correct procedure is to append public keys from keypairs to authorized_keys AND set the correct file permissions.
If you are setting up a set of keypairs in authorized_keys, that is also OK. Make sure that you format the file correctly with public keys and set the file permissions correctly.
The file permissions should be 644 so that the SSH server can read the file.
Another possible issue is that when you change authorized_keys you also need to restart the SSH server, but I see that you are rebooting the server, which removes that problem.
Ubuntu example:
sudo service ssh restart
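If you want cloud-init to add an extra key rather than replace the file, a minimal sketch (not the asker's template) is to append it with runcmd and then fix ownership and permissions; the key string and the ubuntu user are placeholders:
#cloud-config
runcmd:
  # append an additional public key instead of overwriting authorized_keys
  - echo "ssh-ed25519 AAAA...yourkey... you@example" >> /home/ubuntu/.ssh/authorized_keys
  - chown ubuntu:ubuntu /home/ubuntu/.ssh/authorized_keys
  - chmod 644 /home/ubuntu/.ssh/authorized_keys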

What's the use of authorize and keys options in Homestead.yaml file?

I noticed that I can provision a box, and ssh to it even after commenting out both options in Homestead.yaml, as in:
# authorize: ~/.ssh/id_rsa.pub
# keys:
#     - ~/.ssh/id_rsa
Are they necessary at all? I suppose that they let me specify public/private keys for vagrant ssh, but as I understand such pair is generated by vagrant anyway (see here). What is the actual need for those settings then?
The reason I'd like to know that is that I keep running into an issue where I cannot ssh into a box as vagrant up keeps hanging up on homestead-7: SSH auth method: private key (as in this question). With authorize and keys options commented out I haven't had problem with vagrant up so far.
SSH keys are used for passwordless authentication. In order to use this, you will need to run ssh-keygen and press Enter to accept all the defaults. Once the keypair has been generated, Homestead will use it to SSH into the VM and run the necessary commands.
If you are running Windows 10, you will need to install an SSH client. This can be done in various ways, such as Git Bash, PuTTY, OpenSSH or WSL. If you comment the lines out, then it is likely logging into the machine using the default username/password combo given to the machine.
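For reference, a minimal sketch of the setup these options expect (the key paths are the ssh-keygen defaults and the comment string is a placeholder):
ssh-keygen -t rsa -b 4096 -C "you@homestead"
and then in Homestead.yaml:
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa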

Ansible can't ping my vagrant box with the vagrant insecure public key

I'm using Ansible 2.4.1.0 and Vagrant 2.0.1 with VirtualBox on macOS, and although provisioning of my Vagrant box works fine with Ansible, I get an unreachable error when I try to ping it with:
➜ ansible all -m ping
vagrant_django | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
    "unreachable": true
}
The solutions offered on similar questions didn't work for me (like adding the vagrant insecure pub key to my ansible config). I just can't get it to work with the vagrant insecure public key.
Fwiw, here's my ansible.cfg file:
[defaults]
host_key_checking = False
inventory = ./ansible/hosts
roles_path = ./ansible/roles
private_key_file = ~/.vagrant.d/insecure_private_key
And here's my ansible/hosts file (ansible inventory):
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
What did work was using my own SSH public key. When I add this to the authorized_keys on my vagrant box, I can ansible ping:
➜ ansible all -m ping
vagrant_django | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
I can't connect via ssh either, so that seems to be the underlying problem, which is fixed by adding my own public key to authorized_keys on the Vagrant box.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
PS: To clarify, although the root cause is similar to this other question, the symptoms and context are different. I could provision my box with ansible, but couldn't ansible ping it. This justifies another question imho.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
Because the Vagrant insecure key is used for the initial connection to the box only. By default Vagrant replaces it with a freshly generated key, which you'll find in .vagrant/machines/<machine_name>/virtualbox/private_key under the project directory.
You'll also find an automatically generated Ansible inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory if you use the Ansible provisioner in your Vagrantfile, so you don't need to create your own.
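A minimal sketch of pointing the ansible.cfg from the question at that generated inventory and key instead of the shared insecure key (this assumes the default machine name; adjust the path if your machine is named differently):
[defaults]
host_key_checking = False
inventory = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
private_key_file = .vagrant/machines/default/virtualbox/private_key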

Running Ansible playbooks on remote Vagrant box

I have one machine (A) from which I run Ansible playbooks on a variety of hosts. Vagrant is not installed here.
I have another machine (B) with double the RAM that hosts my Vagrant boxes. Ansible is not installed here.
I want to use Ansible to act on Vagrant boxes the same way I do all other hosts; that is, running ansible-playbook on machineA while targeting a virtualized Vagrant box on machineB. SSH keys are already set up between the two.
This seems like a simple use case but I can't find it clearly explained anywhere given the encouraged use of Vagrant's built-in Ansible provisioner. Is it possible?
Perhaps some combination of SSH tunnels and port forwarding trickery?
Turns out this was surprisingly simple. Vagrant in fact does not need to know about Ansible at all.
Ansible inventory on machineA:
default ansible_host=machineB ansible_port=2222
Vagrantfile on machineB:
Vagrant.configure("2") do |config|
  ...
  config.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2222
  ...
end
The id: "ssh" is the important bit, as this overrides the default SSH behavior of restricting SSH to the guest from localhost only.
$ ansible --private-key=~/.ssh/vagrant-default -u vagrant -m ping default
default | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
(Note that the Vagrant private key must be copied over to the Ansible host and specified at the command line).
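Copying that key from machineB to machineA could look like this (a sketch; the project path is a placeholder and the destination matches the key used in the ping above):
scp machineB:/path/to/project/.vagrant/machines/default/virtualbox/private_key ~/.ssh/vagrant-default
chmod 600 ~/.ssh/vagrant-default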

Vagrant Laravel Homestead SSH authentication failed

When I run vagrant provision for Laravel Homestead I get SSH authentication failed! and the Vagrant instance won't run.
SSH authentication failed! This is typically caused by the public/private
keypair for the SSH user not being properly set on the guest VM.
This seems to have started from the error when I first provisioned homestead:
==> default: tee: /home/vagrant/.ssh/authorized_keys: No such file or directory
Having the default folder mapped to /home/vagrant in the Homestead.yaml was causing the issue.
This was my folders setting:
folders:
    - map: /Users/username/www
      to: /home/vagrant
Mapping one folder level deeper fixed the problem:
folders:
    - map: /Users/username/www/homestead
      to: /home/vagrant/www
Working :)
Perhaps someone can elaborate as to why this happens?
Did you set your SSH key paths correctly?
authorize: <PATH TO YOUR PUBLIC KEY>
keys:
    - <PATH TO YOUR PRIVATE KEY>
If you didn't set them correctly, it can't connect.
EDIT
You should not map your project to the root of the home folder, because that mounts it over the entire /home/vagrant/ path in the guest (which is why .ssh/authorized_keys could not be written).
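For completeness, a sketch of a Homestead.yaml that keeps the project in a subfolder of the home directory and points a site at it (the hostname and paths are placeholders):
folders:
    - map: /Users/username/www/homestead
      to: /home/vagrant/www
sites:
    - map: homestead.test
      to: /home/vagrant/www/public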
