How to continue Vagrant/Ansible provision script from error?

After I provision my Vagrant box I may get errors during provisioning. How do I restart from the error, instead of doing everything from scratch with:
vagrant destroy -f && vagrant up
The provisioning may fail with something like:
PLAY RECAP ********************************************************************
to retry, use: --limit @/path/to/playbook.retry
I want to just resume from where it failed. The message suggests it can be done with --limit, but when I use it in the Vagrant context it doesn't work.

You can edit the Vagrantfile and set the ansible.start_at_task option.
Then you can re-run the provisioning with $ vagrant reload --provision
Vagrant Reload docs
However, because Ansible plays are idempotent, you don't really need start_at_task. You can just re-run the provisioning with the reload command above.
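A minimal sketch of what that Vagrantfile change could look like (the playbook path and task name here are hypothetical and must match a name: in your own playbook):
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"             # hypothetical playbook path
  ansible.start_at_task = "Install packages"    # hypothetical task name to resume from
end
Once provisioning succeeds end to end, remove start_at_task again so future runs cover the whole play.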

You could do an Ansible run with ansible-playbook against your Vagrant box. The trick is to use the inventory file the Vagrant Ansible provisioner has generated. It is located in your .vagrant folder.
ansible-playbook playbook.yml -i vagrant/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --start-at-task='MY_TASK'
Vagrantfile: assign your roles to the Vagrant machines.
config.vm.provision "ansible" do |ansible|
ansible.verbose = "v"
ansible.playbook = "../playbook.yml"
ansible.groups = {
"web" => ['vm01app'],
"db" => ['vm01db'],
}
end

To start provisioning at a particular task:
vagrant ssh
provision --list-tasks
provision --start-at-task {task}
If there's anything that needs to be fixed, you can either do it while SSHed into the server, or you can make the changes on the host machine and then sync them to the Vagrant guest.
https://developer.hashicorp.com/vagrant/docs/synced-folders
So, for instance, if you're using rsync as the sync type, you can sync the changes by running
vagrant rsync
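For reference, a one-line sketch of declaring an rsync synced folder in the Vagrantfile (the paths and the exclude pattern are assumptions, not from the original post):
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"   # host dir "." mirrored to /vagrant in the guest
With this in place, vagrant rsync pushes host-side changes into the guest without re-creating the machine.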

If you used the shell provisioner instead of the Ansible one (for this you would need Ansible installed in the VM itself), you could run a command like this:
test -f "/path/to/playbook.retry" && (ansible-playbook site.yml --limit @/path/to/playbook.retry; rm -f "/path/to/playbook.retry") || ansible-playbook site.yml
This basically checks whether a retry file exists; if it does, it runs the playbook with that limit and deletes the retry file afterwards, otherwise it runs the whole playbook. Of course you would need to run these commands with sudo if you are using Vagrant, but that is easy to add.
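A minimal sketch of wiring that idea into a Vagrantfile shell provisioner (the directory layout and playbook name are assumptions, and Ansible must be configured to write .retry files):
config.vm.provision "shell", privileged: true, inline: <<-SHELL
  cd /vagrant/provisioning            # hypothetical folder in the guest containing site.yml
  if [ -f site.retry ]; then
    ansible-playbook site.yml --limit @site.retry
    rm -f site.retry                  # drop the retry file once the limited run has been attempted
  else
    ansible-playbook site.yml
  fi
SHELL
privileged: true makes Vagrant run the script with sudo, which covers the point above about needing root.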

Related

Automate server setup with Ansible SSH keypairs fails without sshpass

I'm using Ansible and want to automate my VPS & homelab setups. I'm running into an issue with the initial connection.
If I have a fresh VPS that has never been used or logged into, how can I remotely configure the node from my laptop?
ansible.cfg
[defaults]
inventory = ./inventory
remote_user = root
host_key_checking = false
ansible_ssh_common_args = "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
inventory
[homelab]
0.0.0.0 <--- actual IP here
./playbooks/add_pub_keys.yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Install public key on remote node
      authorized_key:
        state: present
        user: root
        key: "{{ lookup('file', '~/.ssh/homelab.pub') }}"
Command
ansible-playbook playbooks/add_public_keys.yaml
Now, this fails with permission denied, which makes sense because there is nothing that would allow connection to the remote node.
I tried adding --ask-pass to the command:
ansible-playbook playbooks/add_public_keys.yaml --ask-pass
and typing in the root password, but that fails and says I need sshpass, which is not recommended and not readily available to install on Mac due to security. How should I think about this initial setup process?
When I get issues like this I try and replicate the problem using ansible ad-hoc commands and go back to basics. It helps to prove where the issue is located.
Are you able to run ansible ad-hoc commands against your remote server using the password?
ansible -i ip, all -m shell -a 'uptime' -u root -k
If you can't, something is up with the password, or possibly with the ansible.cfg.

Ansible called by Vagrant does not prompt for vault password

Summary
I have a Vagrantfile provisioning a Virtualbox VM with Ansible. The Ansible playbook contains an Ansible Vault-encrypted variable. My problem is that Vagrant provisioning does not prompt for the password although I pass the option to do so.
Minimal, complete example
Vagrantfile:
Vagrant.configure(2) do |config|
  config.vm.provider "virtualbox" do |vb|
    # Build a master VM for this box and clone it for individual VMs
    vb.linked_clone = true
  end
  config.vm.box = "bento/ubuntu-16.04"
  config.vm.hostname = "test-vm"
  config.vm.provision :ansible do |ansible|
    ansible.verbose = true
    ansible.playbook = "playbook.yml"
    ansible.ask_vault_pass = true
    # ansible.raw_arguments = --ask-vault-pass
    # ansible.raw_arguments = ["--vault-id", "@prompt"]
    # ansible.raw_arguments = ["--vault-id", "dev@prompt"]
  end
end
playbook.yml:
---
- name: Test
  hosts: all
  vars:
    foo: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      65306264626234353434613262613835353463346435343735396138336362643535656233393466
      6331393337353837653239616331373463313665396431390a313338333735346237363435323066
      66323435333331616639366536376639626636373038663233623861653363326431353764623665
      3663636162366437650a383435666537626564393866643461393739393434346439346530336364
      3639
  tasks:
    - name: print foo's value
      debug:
        msg: "foo -> {{ foo }}"
The Ansible Vault password is abc.
When I call vagrant up on first execution of the Vagrantfile, or later vagrant provision, I do not get the expected prompt to enter the password. Instead the task print foo's value fails with the (red) message:
fatal: [default]: FAILED! => {"msg": "Attempting to decrypt but no vault secrets found"}
I also tried the commented-out alternatives in the Vagrantfile to make Ansible prompt for the password. I can see all of them in the ansible-playbook call printed by Vagrant.
Additionally, I tried several options when encrypting foo with ansible-vault encrypt_string, which did not help either.
What can I do to make Ansible prompt for the password when called with Vagrant?
Versions
Kubuntu 16.04
Vagrant 1.8.1 and Vagrant 2.0.0
Ansible 2.4.0.0
Update
This is the Ansible call as printed by Vagrant:
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --ask-vault-pass --limit="default" --inventory-file=/opt/vagrantVM/.vagrant/provisioners/ansible/inventory -v playbook.yml
If I execute this directly without Vagrant the password prompt works as expected! So it must be Vagrant, which somehow suppresses the prompt.
In Ansible 2.4.0.0, the Vault password prompt (i.e. --ask-vault-pass) is skipped when no tty is present (the getpass.getpass function is never executed).
With the Ansible 2.4.0.0 prompt implementation, the Vagrant provisioner integration doesn't receive an interactive prompt.
Note that the --ask-pass and --ask-become-pass implementations haven't changed (i.e. there is no mechanism to skip the getpass function) and are still working fine with Vagrant.
I plan to report the issue upstream to the Ansible project, but for the moment you can resolve the situation by downgrading to Ansible 2.3 (or by using the vault_password_file provisioner option instead).
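As a sketch of the vault_password_file workaround (the file path is an assumption; the file should contain the vault password, here abc, on a single line and be kept out of version control):
config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.vault_password_file = "~/.vagrant_vault_pass.txt"  # hypothetical path to a plain-text password file
end
Vagrant then passes --vault-password-file to ansible-playbook, so no interactive prompt is needed.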
References:
https://github.com/ansible/ansible/pull/22756/files#diff-bdd6c847fae8976ab8a7259d0b583f34L176
https://github.com/hashicorp/vagrant/issues/2924
See the related Bug Fix Pull Request (#31493) in the Ansible project

Ansible & Vagrant - give args to ansible provision

A colleague of mine wrote a script to automate Vagrant installations, including Ansible scripts. So if I run ansible provision, the playbook ansible/playbooks/provision.yml is run on the Vagrant machine(s).
The downside of this script is that the Ansible playbook can only be deployed to the machine(s) with ansible provision.
Now, as I'm writing code and working, I am noticing the downsides, because normally I can give ansible-playbook parameters/arguments, such as ansible-playbook -i inventory provision.yml -vvv --tags "test". But this is not possible here because of an architectural problem.
So, instead of solving the real problem (which I try to evade), are there any gurus out there who can point me in the right direction to make it possible to give ansible provision arguments? E.g. ansible provision -vvv.
I looked at https://www.vagrantup.com/docs/cli/provision.html but it didn't help.
Thanks.
Not completely sure I have understood correctly, but maybe this config (from one of my projects) in the Vagrantfile could help:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "ansible/playbook.yml"
ansible.limit = 'all'
ansible.tags = 'local'
ansible.sudo = true
ansible.verbose = 'v'
ansible.groups = {
"db" => ["db"],
"app" => ["app"],
"myproject" => ["myproject"],
"fourth" => ["fourth"],
"local:children" => ["db", "app", "myproject", "fourth"]
}
end
In this Vagrantfile, I configured 4 Vagrant VMs.
The generated vagrant_ansible_inventory looks like this:
# Generated by Vagrant
db ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
app ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
myproject ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
fourth ansible_ssh_host=127.0.0.1 ansible_ssh_port=2202 ansible_ssh_private_key_file=/home/user/.vagrant.d/insecure_private_key
[db]
db
[app]
app
[myproject]
myproject
[fourth]
fourth
[local:children]
db
app
myproject
fourth
https://www.vagrantup.com/docs/provisioning/ansible_local.html
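If the goal is simply to pass extra flags through to ansible-playbook, the provisioner's raw_arguments option is another route. A sketch, assuming the playbook path from the question:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "ansible/playbooks/provision.yml"   # path taken from the question
  ansible.raw_arguments = ["-vvv", "--tags", "test"]     # extra arguments handed to ansible-playbook
end
A plain vagrant provision then runs the playbook with those extra arguments; changing them still means editing the Vagrantfile rather than the command line.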

Running Ansible against host group

When I run this Ansible command - ansible testserver -m ping - it works just fine, but when I run this command - ansible webservers -m ping - I get the following error: ERROR! Specified hosts options do not match any hosts.
My hosts file looks like this:
[webservers]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
What could be the problem? Why can ansible recognize the host in question and not the host group?
I've tried changing the file to make sure ansible is reading from this file specifically, and made sure this is the case, so this is not a problem of reading configurations from another file I am not aware of.
I've also tried using the solutions specified in Why Ansible skips hosts group and does nothing but it seems like a different problem with a different solution.
EDIT - added my ansible.cfg file, to point out I've already made all the Vagrant-specific configurations.
[defaults]
inventory = ./ansible_hosts
roles_path = ./ansible_roles
remote_user = vagrant
private_key_file = .vagrant/machine/default/virtualbox/private_key
host_key_checking = False
I think you are working with Vagrant, and you need to ping like this:
ansible -i your-inventory-file webservers -m ping -u vagrant -k
Why your ping failed previously:
Ansible tries to connect to the Vagrant machine using your local login user, which doesn't exist on the Vagrant machine.
It also needs the password for the vagrant user, which is also vagrant.
Hope that helps.

Does Vagrant create the /etc/sudoers.d/vagrant file, or does it need to be added manually to use ansible sudo_user?

Some Ubuntu cloud images, such as this one:
http://cloud-images.ubuntu.com/vagrant/precise/20140120/precise-server-cloudimg-amd64-vagrant-disk1.box
have the file /etc/sudoers.d/vagrant, with the content vagrant ALL=(ALL) NOPASSWD:ALL
Other boxes, such as this one https://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-13.10_chef-provisionerless.box
don't have it, and as a result Ansible commands with sudo_user don't work.
I can add the file with:
- name: ensure Vagrant is a sudoer
  copy: content="vagrant ALL=(ALL) NOPASSWD:ALL" dest=/etc/sudoers.d/vagrant owner=root group=root
  sudo: yes
I'm wondering if this is something Vagrant should be doing, because this task will not be applicable when running the Ansible playbook on a real (non-Vagrant) machine.
Vagrant requires privileged access to the VM, either using config.ssh.username = "root" or, more commonly, via sudo. The Bento Ubuntu boxes currently configure it directly in /etc/sudoers.
I don't know what Ansible's sudo_user does or means, but I guess your provisioning is overriding /etc/sudoers. In this case you really need to ensure you don't lose Vagrant's sudo access to the VM, or build your own base box which uses sudoers.d.
As a side note, if you create a file under /etc/sudoers.d/, you should also set its mode to 0440, or at least some older sudo versions will refuse to apply it.
