Ansible & Vagrant - apt-get: command not found

I am new to Vagrant and Ansible. Currently I am stuck on Ansible telling me that it can't find the apt-get command.
My Vagrant box runs on Ubuntu and here are the relevant files:
// Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :private_network, :ip => "192.168.33.10"

  # make sure apt repo is up to date
  config.vm.provision :shell, :inline => 'apt-get -qqy update'

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
// vagrant_ansible_inventory_default
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 127.0.0.1
  user: root
  # remote_user: user
  # sudo: yes
  roles:
    - nginx
// roles/nginx/tasks/main.yml
---
- name: Installs nginx web server
  apt: pkg=nginx state=installed update_cache=true
  notify:
    - start nginx
When I run vagrant provision, I get
[default] Running provisioner: shell...
[default] Running: inline script
stdin: is not a tty
[default] Running provisioner: ansible...
PLAY [Install MySQL, Nginx, Node.js, and Monit] *******************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [nginx | Installs nginx web server] *************************************
failed: [127.0.0.1] => {"cmd": "apt-get update && apt-get install python-apt -y -q",
"failed": true, "item": "", "rc": 127}
stderr: /bin/sh: apt-get: command not found
msg: /bin/sh: apt-get: command not found
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/foosbar/playbook.retry
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
What am I missing?

What happens if you run the play against the IP address you've actually assigned to the Vagrant box?
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 192.168.33.10
The hosts you want to run the play against are the Vagrant machines. hosts in this case doesn't refer to the control machine (the master), but to the managed nodes.
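Equivalently, since the inventory Vagrant generates for you (shown in the question) names the machine default, the play can target that name instead of the IP. A minimal sketch along those lines; the remote_user and sudo settings are assumptions based on Vagrant's usual defaults:

// playbook.yml (sketch -- assumes the Vagrant-generated inventory shown in the question)
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: default          # the name Vagrant gives the box in vagrant_ansible_inventory_default
  remote_user: vagrant    # Vagrant's default SSH user (assumed)
  sudo: yes               # apt tasks need root
  roles:
    - nginx

With hosts: 127.0.0.1, nothing in the generated inventory matches that pattern, so Ansible ends up running the play on the control machine itself -- which is why the output shows a /Users/... retry path and no apt-get: the tasks executed on OS X, not inside the VM.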

Related

Trying to provision Vagrant manually and getting an error

I am trying to provision Vagrant with the playbook I wrote, but when I do I'm getting an error about the host.
This is my Vagrantfile:
Vagrant.configure("2") do |config|
  VAGRANT_DEFAULT_PROVIDER = "virtualbox"
  config.vm.hostname = "carebox-idan"
  #config.vm.provision "ansible", playbook: "playbook.yml"
  config.vm.network "public_network", ip: "192.168.56.4", bridge: ["en0: Wi-Fi (Wireless)"]
  config.vm.box = "laravel/homestead"
  config.vm.network "forwarded_port", guest: 8200, host: 8200, auto_correct: "true"
  config.ssh.forward_agent = true
end
This is my playbook.yml:
---
- name: Playbook to install and use Vault
  become: true
  hosts: server1
  #gather_facts: no
  tasks:
    - name: Uptade1
      become: true
      become_user: root
      shell: uname -a
      register: bla

    - debug:
        msg: "{{ bla.stdout }}"

    - name: Uptade1
      become: true
      become_user: root
      shell: apt update

    - name: gpg
      become: true
      become_user: root
      shell: apt install gpg

    - name: verify key
      become: true
      become_user: root
      shell: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null

    - name: fingerprint
      become: true
      become_user: root
      shell: gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

    - name: repository
      become: true
      become_user: root
      shell: echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

    - name: update2
      become: true
      become_user: root
      shell: apt update

    - name: vault install
      become: true
      become_user: root
      shell: apt install vault
This is my inventory:
all:
  hosts:
    server1:
      ansible_host: 192.168.56.4
The error I get when running ansible-playbook -i /Users/idan/carebox/inventory.yml playbook.yml is:
fatal: [server1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.56.4 port 22: Operation timed out", "unreachable": true}
PLAY RECAP *********************************************************************
server1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
I tried changing the host IP in the inventory to 127.0.0.1, and it says the same but with "connection refused"; with 0.0.0.0 it keeps giving this error as well.
I also tried changing the IP in the Vagrantfile to "private_key", and it also didn't work.
It only works if I provision during vagrant up, but I want to do it manually after I start Vagrant, with the command above.
I would really like some help, thank you!
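For comparison, the inventories Vagrant generates itself (quoted in the other questions on this page) reach the box through a forwarded SSH port on 127.0.0.1 plus the machine's private key, rather than through the guest's own IP. A hand-written inventory modelled on that pattern might look like the sketch below; the port, user, and key path are assumptions and should be checked against the output of vagrant ssh-config:

# sketch only -- confirm the real values with `vagrant ssh-config`
all:
  hosts:
    server1:
      ansible_host: 127.0.0.1
      ansible_port: 2222                 # Vagrant's default forwarded SSH port (assumed)
      ansible_user: vagrant              # Vagrant's default SSH user (assumed)
      ansible_ssh_private_key_file: .vagrant/machines/default/virtualbox/private_key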

Trying to provision my Vagrant with Ansible

I am trying to provision a virtual machine with an Ansible playbook.
Following the documentation, I ended up with this simple Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.50.5"

  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "vvv"
    ansible.playbook = "playbook.yml"
  end
end
As you can see, I am trying to provision a xenial64 machine (Ubuntu 16.04) from a playbook.yml file.
When I launch vagrant provision, here is what I get:
$ vagrant provision
==> default: Running provisioner: ansible...
default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/mmarteau/Code/ansible-arc/.vagrant/provisioners/ansible/inventory -vvv playbook.yml
Using /etc/ansible/ansible.cfg as config file
statically included: /home/mmarteau/Code/ansible-arc/roles/user/tasks/ho-my-zsh.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/nginx.yml
statically included: /home/mmarteau/Code/ansible-arc/roles/webserver/tasks/php.yml
statically included: /etc/ansible/roles/geerlingguy.composer/tasks/global-require.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-RedHat.yml
statically included: /etc/ansible/roles/geerlingguy.nodejs/tasks/setup-Debian.yml
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY RECAP *********************************************************************
So my file seems to be read, because I get some statically included entries from the roles in my playbook.yml file.
However, the run stops very quickly, and I don't get any information to debug or to see any errors.
How can I debug this process?
EDIT: More info
Here is my playbook.yml file:
---
- name: Installation du serveur
  # hosts: web
  hosts: test
  vars:
    user: mmart
    apps:
      dev:
        branch: development
        domain: admin.test.dev
      master:
        branch: master
        domain: admin.test.fr
    bitbucket_repository: git@bitbucket.org:Test/test.git
    composer_home_path: '/home/mmart/.composer'
    composer_home_owner: mmart
    composer_home_group: mmart
    zsh_theme: agnoster
    environment_file: arc-parameters.yml
    ssh_agent_config: arc-ssh-config
  roles:
    - apt
    - user
    - webserver
    - geerlingguy.composer
    - geerlingguy.nodejs
    - deploy
    - deployer
...
Here is my host file:
[web]
XX.XX.XXX.XXX ansible_ssh_private_key_file=/somekey.pem ansible_become=true ansible_user=ubuntu
[test]
Here is the generated host file from Vagrant in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory:
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='ubuntu' ansible_ssh_private_key_file='/home/mmart/Code/ansible-test/.vagrant/machines/default/virtualbox/private_key'
Is this correct? Shouldn't the ansible_ssh_user be set to vagrant?
In your playbook, use default as the hosts value, since Vagrant by default will only create an inventory entry for that particular host:
---
- name: Installation du serveur
  hosts: default
  (...)

How to use Ansible with Vagrant locally?

I followed the official tutorial from: http://docs.ansible.com/ansible/guide_vagrant.html
My Vagrantfile looks like:
VAGRANTFILE_API_VERSION = '2'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = 'bento/centos-7.2'

  # SSH default root user
  config.ssh.username = 'root'
  config.ssh.password = 'vagrant'

  # Networking
  # config.vm.network :private_network, ip: '192.168.33.16'

  # Provisioning
  config.vm.provision :ansible do |ansible|
    ansible.playbook = 'playbooks/main.yml'
  end
end
And my test.sh script:
ansible-playbook \
  --private-key=.vagrant/machines/default/virtualbox/private_key \
  -u vagrant \
  -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  playbooks/main.yml \
  $@
But when I run the script, I receive the following error:
fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
to retry, use: --limit @playbooks/main.retry
PLAY RECAP *********************************************************************
default : ok=0 changed=0 unreachable=1 failed=0
How to test ansible-playbook with Vagrant?
If you want to run Ansible on the guest machine itself, Vagrant already has an ansible_local provisioner.
To use it you first have to install Ansible on the guest machine if it's not already installed, and this can be done from a Vagrantfile shell provisioner with easy_install or pip.

Ansible Service Restart Failed

I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
- remote_user: vagrant
- tasks:
  ...
  - name: Forbid SSH root login
    sudo: yes
    lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="permitRootLogin no" state=present
    notify:
      - restart ssh
  ...
- handlers:
  - name: restart ssh
    sudo: yes
    service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command in box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
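On an Ansible release where that bug is fixed, the handler can presumably go back to the service module. A minimal sketch of that form, using the become syntax that replaced sudo: yes in Ansible 2.x (the target version is an assumption):

- name: restart ssh
  become: yes           # run the restart as root
  service:
    name: ssh
    state: restarted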

Vagrant with Ansible error

If I have a Vagrantfile with Ansible provisioning:
Vagrant.configure(2) do |config|
  config.vm.box = 'hashicorp/precise32'
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provision :ansible do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.inventory_path = "hosts"
    ansible.limit = 'all'
    ansible.sudo = true
  end
end
My hosts file is very simple:
[local]
web ansible_connection=local
and playbook.yml is:
---
- hosts: local
  sudo: true
  remote_user: vagrant
  tasks:
    - name: update apt cache
      apt: update_cache=yes
    - name: install apache
      apt: name=apache2 state=present
When I start the VM with vagrant up I get this error:
failed: [web] => {"failed": true, "parsed": false}
[sudo via ansible, key=daxgehmwoinwalgbzunaiovnrpajwbmj] password:
What's the problem?
The error is occurring because Ansible is assuming key-based SSH authentication; however, your Vagrant setup creates a VM that uses password-based authentication.
There are two ways you can solve this issue.
You can run your Ansible playbook as
  ansible-playbook playbook.yml --ask-pass
This tells Ansible not to assume key-based authentication, and instead to use password-based SSH authentication, prompting for the password before execution.
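The same credentials can also be supplied non-interactively through inventory variables instead of the --ask-pass prompt. A minimal sketch as a group_vars file for the local group from the hosts file above; the file name and the vagrant/vagrant credentials are assumptions (Vagrant's conventional defaults), and password-based SSH with the OpenSSH connection plugin also needs sshpass installed on the control machine:

# group_vars/local.yml (sketch only -- assumed path and credentials)
ansible_ssh_user: vagrant   # Vagrant's conventional default user (assumed)
ansible_ssh_pass: vagrant   # Vagrant's conventional default password (assumed)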
