I am trying to provision Vagrant with the playbook I wrote, but when I do I'm getting an error about the host.
This is my Vagrantfile:
Vagrant.configure("2") do |config|
  VAGRANT_DEFAULT_PROVIDER = "virtualbox"
  config.vm.hostname = "carebox-idan"
  #config.vm.provision "ansible", playbook: "playbook.yml"
  config.vm.network "public_network", ip: "192.168.56.4", bridge: ["en0: Wi-Fi (Wireless)"]
  config.vm.box = "laravel/homestead"
  config.vm.network "forwarded_port", guest: 8200, host: 8200, auto_correct: "true"
  config.ssh.forward_agent = true
end
This is my playbook.yml:
---
- name: Playbook to install and use Vault
  become: true
  hosts: server1
  #gather_facts: no
  tasks:
    - name: Update1
      become: true
      become_user: root
      shell: uname -a
      register: bla
    - debug:
        msg: "{{ bla.stdout }}"
    - name: Update1
      become: true
      become_user: root
      shell: apt update
    - name: gpg
      become: true
      become_user: root
      shell: apt install gpg
    - name: verify key
      become: true
      become_user: root
      shell: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
    - name: fingerprint
      become: true
      become_user: root
      shell: gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
    - name: repository
      become: true
      become_user: root
      shell: echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    - name: update2
      become: true
      become_user: root
      shell: apt update
    - name: vault install
      become: true
      become_user: root
      shell: apt install vault
This is my inventory:
all:
  hosts:
    server1:
      ansible_host: 192.168.56.4
The error I get while using this command - ansible-playbook -i /Users/idan/carebox/inventory.yml playbook.yml - is:
fatal: [server1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.56.4 port 22: Operation timed out", "unreachable": true}
PLAY RECAP *********************************************************************
server1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
I tried changing the host IP in the inventory to 127.0.0.1, and it says the same but with "connection refused", or to 0.0.0.0, but it keeps giving this error.
I also tried changing the network in the Vagrantfile to "private_network", and it also didn't work.
It only works if I provision during vagrant up, but I want to do it manually after I start Vagrant, with the command I wrote above.
I would really like some help, thank you!
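For reference, a common way to reach the box when running ansible-playbook by hand is to reuse the SSH settings Vagrant itself reports via vagrant ssh-config; the inventory below is only a sketch, and the port and key path are assumptions to check against that output:
all:
  hosts:
    server1:
      ansible_host: 127.0.0.1
      ansible_port: 2222    # Vagrant's default NAT-forwarded SSH port
      ansible_user: vagrant
      ansible_ssh_private_key_file: .vagrant/machines/default/virtualbox/private_key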
Related
I'm having an issue where the Ansible service module is failing due to a sudo password issue:
fatal: [192.168.1.10]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 192.168.1.10 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "rc": 1}
to retry, use: --limit @/Volumes/HD/Users/user/Ansible/playbooks/stop-homeassistant.retry
My playbook has just one task, to stop the service. It looks like:
---
- hosts: 192.168.1.10
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Or, in the case of systemd:
      systemd: state=stopped name=home-assistant@homeassistant enabled=yes
I'm running the playbook like so:
ansible-playbook -u homeassistant playbooks/stop-homeassistant.yml
However, passwordless sudo is set up for that user on that box (in /etc/sudoers.d):
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl restart home-assistant@homeassistant
homeassistant ALL=(ALL) NOPASSWD:/bin/systemctl stop home-assistant@homeassistant
If I ssh into that box as homeassistant, and I run:
sudo systemctl stop home-assistant@homeassistant
The home-assistant@homeassistant service will stop cleanly without asking for a sudo password.
Any idea why the systemctl command would run perfectly as the user on the box, but then fail in the service/systemd module?
Try configuring passwordless sudo on your target machines:
homeassistant ALL=NOPASSWD: ALL
Configuring specific commands with a NOPASSWD flag in /etc/sudoers does not work with Ansible.
Details here: https://github.com/ansible/ansible/issues/5712
Ok, please modify your playbook as below:
- hosts: 192.168.1.10
  remote_user: home-assistant
  become: true
  become_method: sudo
  become_user: root
  tasks:
    - name: Stop Homeassistant
      become: true
      service: name=home-assistant@homeassistant state=stopped enabled=yes
Now,
Run as ansible-playbook <playbook-name>.
If the above command fails because a password is required, please run:
ansible-playbook playbook.yml --user=<username> --extra-vars "ansible_sudo_pass=<yourPassword>"
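Alternatively, the become password can be kept out of the command line by defining it as an inventory variable (ideally encrypted with ansible-vault); ansible_become_pass is the newer name for ansible_sudo_pass. A sketch, with a hypothetical host_vars file:
# host_vars/192.168.1.10.yml (hypothetical path)
ansible_become_pass: "<yourPassword>"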
Hello guys, I made a simple playbook to practice with Ansible, but I have a problem when I try to run it (ansible-playbook -i hosts.ini playbook.yml) to configure an EC2 instance; the output returns:
> fatal: [XX.XXX.XXX.XXX]: FAILED! => {
> "changed": false,
> "failed": true,
> "invocation": {
> "module_name": "setup"
> },
> "module_stderr": "Shared connection to XXX.XXX.XXX.XXX closed.\r\n",
> "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
> "msg": "MODULE FAILURE" } to retry, use: --limit #/home/douglas/Ansible/ansible_praticing/projeto2.retry
>
> PLAY RECAP
> *********************************************************************
> XX.XXX.XXX.XXX : ok=0 changed=0 unreachable=0 failed=1
When I try to connect to the instance via ssh -i ~/.ssh/key.pem ubuntu@public.ip it works well, but the provisioning does not.
My playbook:
- hosts: projeto
  sudo: True
  remote_user: ubuntu
  vars_files:
    - vars.yml
  tasks:
    - name: "Update"
      apt: update_cache=yes
    - name: "Install the Ansible"
      apt: name=ansible state=latest
    - name: "Install the mysql"
      apt:
      args:
        name: mysql-server
        state: latest
    - name: "Install the Nginx"
      apt:
      args:
        name: nginx
        state: latest
My hosts.ini is also OK (with the public IP of the AWS EC2 instance), and I put the public key (~/.ssh/id_rsa.pem of my local machine) in the ~/.ssh/authorized_keys file inside the instance.
Last week (Friday) this playbook was working well.
What am I doing wrong?
Maybe my answer is too late, but I faced the same problem today. I have an Ubuntu 16.04 instance running on EC2, which has Python 3 (Python 3.5) as its default Python installation, so Ansible is not able to find the Python binary it expects (/usr/bin/python). I got around this issue by pointing the Ansible Python interpreter at Python 3.
I added ansible_python_interpreter=/usr/bin/python3 to my inventory file and did not have to change the playbook.
Reference - http://docs.ansible.com/ansible/latest/python_3_support.html
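For example, in a YAML inventory the setting sits next to the host entry (in an INI inventory like hosts.ini you append it to the host's line instead). A sketch with placeholder host alias and address:
all:
  hosts:
    projeto:
      ansible_host: XX.XXX.XXX.XXX
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3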
---
- hosts: my-host
  tasks:
    - vsphere_guest:
        vcenter_hostname: vcenter.mydomain.local
        username: myuser
        password: mypass
        guest: newvm001
        vmware_guest_facts: yes
When I run this playbook, I get this error
PLAY [my-host] *****************************************************************

TASK [setup] *******************************************************************
ok: [19.3.112.97]

TASK [vsphere_guest] ***********************************************************
fatal: [19.3.112.97]: FAILED! => {"changed": false, "failed": true, "msg": "pysphere module required"}

NO MORE HOSTS LEFT *************************************************************
 [WARNING]: Could not create retry file 'createvms.retry'. [Errno 2] No such file or directory: ''

PLAY RECAP *********************************************************************
19.3.112.97                : ok=1    changed=0    unreachable=0    failed=1
Why do I get this error? I have uninstalled and reinstalled pysphere, and I have tried both previous and current versions of it, but I still get this error.
You usually want to run cloud/VM management modules from your control machine (localhost).
This would look like this:
---
- hosts: localhost
  connection: local
  tasks:
    - vsphere_guest:
        vcenter_hostname: vcenter.mydomain.local
        username: myuser
        password: mypass
        guest: newvm001
        vmware_guest_facts: yes
In this case Ansible uses the PySphere installed on your control host to connect to vcenter.mydomain.local and provision VMs.
In your example, PySphere would have to be installed on 19.3.112.97, and vcenter.mydomain.local would have to be accessible from that host.
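An alternative sketch, if you prefer to keep your original hosts: line, is to delegate just the vSphere task to the control machine where PySphere is installed:
---
- hosts: my-host
  tasks:
    - vsphere_guest:
        vcenter_hostname: vcenter.mydomain.local
        username: myuser
        password: mypass
        guest: newvm001
        vmware_guest_facts: yes
      delegate_to: localhost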
I am trying to run an extremely simple playbook to test a new Ansible setup.
When using the 'new' Ansible Privilege Escalation config options in my ansible.cfg file:
[defaults]
host_key_checking=false
log_path=./logs/ansible.log
executable=/bin/bash
#callback_plugins=./lib/callback_plugins
######
[privilege_escalation]
become=True
become_method='sudo'
become_user='tstuser01'
become_ask_pass=False
[ssh_connection]
scp_if_ssh=True
I get the following error:
fatal: [webserver1.local] => Internal Error: this module does not support running commands via 'sudo'
FATAL: all hosts have already failed -- aborting
The playbook is also very simple:
# Checks the hosts provisioned by midrange
---
- name: Test su connecting as current user
  hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      #action: ping
      command: /usr/bin/whoami
I am not sure if there is something broken in Ansible 1.9.1 or if I am doing something wrong. Surely the 'command' module in Ansible allows running commands as sudo.
The issue is with the configuration; I took it as an example and got the same problem. After playing with it awhile, I noticed that the following works:
1) deprecated sudo:
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
2) new become:
---
- hosts: all
  become: yes
  become_method: sudo
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
3) using ansible.cfg:
[privilege_escalation]
become = yes
become_method = sudo
and then in a playbook:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
Since you are "becoming" tstuser01 (not root, as in my case), please experiment a bit; the user name probably should not be quoted either:
become_user = tstuser01
At least this is the way I define remote_user in ansible.cfg, and it works. My issue is resolved; I hope yours is too.
I think you should use the sudo directive in the hosts section so that subsequent tasks run with sudo privileges unless you explicitly specify sudo: no in a task.
Here's your playbook that I've modified to use sudo directive.
# Checks the hosts provisioned by midrange
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      command: /usr/bin/whoami
I am new to using Vagrant and Ansible. Currently I am stuck on Ansible telling me that it can't find the apt-get command.
My Vagrant box runs on Ubuntu and here are the relevant files:
// Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :private_network, :ip => "192.168.33.10"

  # make sure apt repo is up to date
  config.vm.provision :shell, :inline => 'apt-get -qqy update'

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
// vagrant_ansible_inventory_default
# Generated by Vagrant
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 127.0.0.1
  user: root
  # remote_user: user
  # sudo: yes
  roles:
    - nginx
// roles/nginx/tasks/main.yml
---
- name: Installs nginx web server
  apt: pkg=nginx state=installed update_cache=true
  notify:
    - start nginx
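The notify above assumes the role also defines a handler with exactly that name; a minimal sketch of what roles/nginx/handlers/main.yml would contain (this file is not shown in the question):
// roles/nginx/handlers/main.yml
---
- name: start nginx
  service: name=nginx state=started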
When I run vagrant provision, I get
[default] Running provisioner: shell...
[default] Running: inline script
stdin: is not a tty
[default] Running provisioner: ansible...
PLAY [Install MySQL, Nginx, Node.js, and Monit] *******************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [nginx | Installs nginx web server] *************************************
failed: [127.0.0.1] => {"cmd": "apt-get update && apt-get install python-apt -y -q",
"failed": true, "item": "", "rc": 127}
stderr: /bin/sh: apt-get: command not found
msg: /bin/sh: apt-get: command not found
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/foosbar/playbook.retry
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
What am I missing?
What happens if you run the play against the IP address you've actually assigned to the Vagrant box?
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: 192.168.33.10
The host on which you want to run the play is the Vagrant box. hosts in this case doesn't refer to the master, but to the nodes.
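For instance, when running through vagrant provision with the generated inventory shown above, the play can also target the alias from that file (default) or simply all; a sketch:
// playbook.yml
---
- name: Install MySQL, Nginx, Node.js, and Monit
  hosts: default    # the alias from vagrant_ansible_inventory_default; "all" would also match
  roles:
    - nginx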