Unable to access hosts besides localhost in Ansible

I'm self-learning Ansible and have the file structure below, with my customized ansible.cfg and inventory file:
ansible/           <-- working level
    ansible.cfg    (customized cfg)
    dev            (inventory file)
    playbooks/     <-- issue level
        hostname.yml
ansible.cfg
[defaults]
inventory=./dev
dev
[loadbalancer]
lb01
[webserver]
app01
app02
[database]
db01
[control]
control ansible_connection=local
hostname.yml
---
- hosts: all
  tasks:
    - name: get server hostname
      command: hostname
My question is: when I run ansible-playbook playbooks/hostname.yml at the ansible (folder) level, everything works fine:
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [control]
ok: [app01]
ok: [db01]
ok: [lb01]
ok: [app02]
TASK [get server hostname] *****************************************************
changed: [db01]
changed: [app01]
changed: [app02]
changed: [lb01]
changed: [control]
PLAY RECAP *********************************************************************
app01 : ok=2 changed=1 unreachable=0 failed=0
app02 : ok=2 changed=1 unreachable=0 failed=0
control : ok=2 changed=1 unreachable=0 failed=0
db01 : ok=2 changed=1 unreachable=0 failed=0
lb01 : ok=2 changed=1 unreachable=0 failed=0
However, when I go into the playbooks folder and run ansible-playbook hostname.yml, it gives me a warning saying:
[WARNING]: provided hosts list is empty, only localhost is available
PLAY RECAP *********************************************************************
Is there anything that prevents the playbook from accessing the inventory file?

Ansible does not traverse parent directories in search of the configuration file ansible.cfg. If you want to use a custom configuration file, you need to place it in the current directory and run the Ansible executables from there, or define the environment variable ANSIBLE_CONFIG pointing to the file.
In your case: changing the directory to ansible and executing is the proper way:
ansible-playbook playbooks/hostname.yml
I don't know where you got the idea that Ansible would check for the configuration file in the parent directory; the documentation is unambiguous:
Changes can be made and used in a configuration file which will be processed in the following order:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
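If you do want to run from inside playbooks/, a sketch of the ANSIBLE_CONFIG workaround (the relative path is an assumption based on the layout in the question):

```shell
# From inside playbooks/, point ANSIBLE_CONFIG at the parent's config file
# so Ansible still finds the custom ansible.cfg (and through it, the inventory).
export ANSIBLE_CONFIG=../ansible.cfg
ansible-playbook hostname.yml
```

Since the inventory path in ansible.cfg is relative (./dev), running from the directory that contains ansible.cfg remains the simplest option.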


ansible-playbook ignores '--check' parameter?

While testing a simple Ansible playbook
---
- hosts: mikrotiks
  connection: network_cli
  gather_facts: no
  vars:
    ansible_network_os: routeros
    ansible_user: admin
  tasks:
    - name: Add Basic FW Rules
      routeros_command:
        commands:
          - /ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
on my MikroTik router, I ran the command with the --check argument
ansible-playbook -i hosts mikrotik.yml --check
but it seems the tasks actually got executed.
PLAY [mikrotiks] **************************************************************************************************************************************
TASK [Add Basic FW Rules] **************************************************************************************************************************************
changed: [192.168.1.82]
PLAY RECAP **************************************************************************************************************************************
192.168.1.82 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The ansible.cfg file is the default configuration after a fresh install.
According to the documentation, command module – Run commands on remote devices running MikroTik RouterOS:
The module always indicates a (changed) status. You can use the changed_when task property to determine whether a command task actually resulted in a change or not.
Since the module is part of the community.routeros collection, I had a short look into the source there and found that it supports check mode:
module = AnsibleModule(argument_spec=argument_spec,
                       supports_check_mode=True)
So you will need to follow up by defining changed_when.
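A minimal sketch of that follow-up (the register variable name is my own; adapt the condition to what your device actually returns):

```yaml
- name: Add Basic FW Rules
  routeros_command:
    commands:
      - /ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
  register: fw_result
  # The module always reports "changed"; override that here.
  # Use `false`, or a condition on fw_result.stdout that detects a real change.
  changed_when: false
```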

Using become in ansible locally

I am trying to understand --become in order to use Ansible to do some local tasks on my CentOS machine. I tried several Ansible modules (copy, unarchive) with become, each resulting in a different kind of error.
Platform used: CentOS 7
Ansible (installed in a Python 3 virtual env) version:
(ansible) [maadam@linux update_centos]$ ansible --version
ansible 2.10.16
config file = None
configured module search path = ['/home/maadam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/maadam/Sources/python/venv/ansible/lib64/python3.6/site-packages/ansible
executable location = /home/maadam/Sources/python/venv/ansible/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
I tried to reproduce the example provided by @techraf in this issue to test become: Using --become for ansible_connection=local.
I used the same playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
So I expected the same result as this:
(ansible) [maadam@linux update_centos]$ sudo whoami
root
Without become:
(ansible) [maadam@linux update_centos]$ ansible-playbook playbook.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [command] *****************************************************************************************
changed: [localhost]
TASK [debug] *******************************************************************************************
ok: [localhost] => {
"whoami.stdout": "maadam"
}
PLAY RECAP *********************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
With become I have this error:
(ansible) [maadam@linux update_centos]$ ansible-playbook playbook.yml --become
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [command] *****************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/var/tmp/sclPip796: line 8: -H: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP *********************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So I don't understand what I am missing with become.
Thanks for your help.
In the ansible.cfg file, check the become_method. You can use "sudo su -".
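For reference, a privilege-escalation sketch in ansible.cfg (the values here are illustrative, not taken from the question; the "-H: command not found" error often points to a broken sudo invocation, so checking these settings is a reasonable first step):

```ini
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
```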
I don't know if I am handling this correctly, but if I run my playbook as root, I have no error:
(ansible) [maadam@linux update_centos]$ sudo ansible-playbook playbook.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [command] ****************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"whoami.stdout": "root"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Not sure it is the right way of doing things locally with Ansible. Of course, if you are already root, there is no need for privilege escalation.

How to sequentially run all the tasks per host in Ansible

I am writing an Ansible playbook where I have 3 Tomcat nodes. My script does a lot of things, like copying the release from Nexus to the application nodes, deploying it, starting the Tomcats, etc.
What I want to achieve is to do this sequentially: the playbook should run for one host, and when it completes, it should start on the next host. My inventory looks like the one below, and I am using group_vars as I have multiple environments like prod, preprod, etc.
Can someone help?
[webserver]
tomcat1
tomcat2
tomcat3
Q: "The playbook should run for one host and when it gets completed, it should start for another host."
A: Use serial. For example
shell> cat playbook.yml
- hosts: webserver
  gather_facts: false
  serial: 1
  tasks:
    - debug:
        var: inventory_hostname
shell> ansible-playbook playbook.yml
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat1] => {
"inventory_hostname": "tomcat1"
}
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat2] => {
"inventory_hostname": "tomcat2"
}
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat3] => {
"inventory_hostname": "tomcat3"
}
PLAY RECAP ***
tomcat1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
tomcat2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
tomcat3: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
See How to build your inventory
shell> cat hosts
[webserver]
tomcat1
tomcat2
tomcat3
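As a side note, serial also accepts percentages or a list of batch sizes, which helps when strict one-host-at-a-time rollout is too slow; a sketch (batch sizes are illustrative):

```yaml
- hosts: webserver
  gather_facts: false
  serial:
    - 1        # canary: first batch is a single host
    - "50%"    # then half of the remaining hosts per batch
  tasks:
    - debug:
        var: inventory_hostname
```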

"apt" module in Ansible playbook execution randomly fails on different hosts each time with message "Failed to lock apt for exclusive operation"

When I try to install a list of packages ('apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv') on a group of hosts (app01 and app02), it fails randomly on either app01 or app02.
After the first few retries, the required packages are already installed on the hosts. Subsequent executions continue to fail randomly on app01 or app02 instead of returning a simple success.
The Ansible controller and hosts are all running Ubuntu 16.04.
The controller has Ansible 2.8.4.
I have already set "become: yes" in the playbook.
I have already tried running apt-get clean and apt-get update on the hosts.
Inventory file
[webserver]
app01 ansible_python_interpreter=python3
app02 ansible_python_interpreter=python3
Playbook
---
- hosts: webserver
  tasks:
    - name: install web components
      become: yes
      apt:
        name: ['apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv']
        state: present
        update_cache: yes
In the first run, the execution fails on host app01.
In the second run, the execution fails on host app02.
1st Run
shekhar@control:~/ansible$ ansible-playbook -v playbooks/webservers.yml
Using /home/shekhar/ansible/ansible.cfg as config file
PLAY [webserver] ******************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [app01]
ok: [app02]
TASK [install web components] ******************************************************************************
fatal: [app01]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
ok: [app02] => {"cache_update_time": 1568203558, "cache_updated": false, "changed": false}
PLAY RECAP ******************************************************************************
app01 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
app02 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2nd Run
shekhar@control:~/ansible$ ansible-playbook -v playbooks/webservers.yml
Using /home/shekhar/ansible/ansible.cfg as config file
PLAY [webserver] ******************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [app01]
ok: [app02]
TASK [install web components] ******************************************************************************
fatal: [app02]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
ok: [app01] => {"cache_update_time": 1568203558, "cache_updated": false, "changed": false}
PLAY RECAP ******************************************************************************
app01 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
app02 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The behavior is a bit strange… one succeeding and the other failing each time.
My guess is that your app01 and app02 are actually the same host! If so, it sounds normal that the two jobs fight with each other over the apt lock.
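If the hosts really are distinct and something like unattended-upgrades is simply holding the dpkg lock, retrying the task is a common workaround; a sketch (the register name and retry counts are illustrative):

```yaml
- name: install web components
  become: yes
  apt:
    name: ['apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv']
    state: present
    update_cache: yes
  register: apt_result
  retries: 5       # illustrative; tune to how long the lock is typically held
  delay: 10
  until: apt_result is succeeded
```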

Two different version of ansible gives two different outputs for same ansible playbook

- hosts: Ebonding
  become: yes
  become_method: sudo
  tasks:
    - name: Clearing cache of Server4
      file: path=/weblogic/bea/user_projects/domains/tmp state=absent
      become: yes
      become_user: wls10
Ansible version 2.0.0.0 runs the above playbook successfully:
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [ggnqinfa2]
TASK [Clearing cache of Server4] ***********************************************
ok: [ggnqinfa2]
PLAY RECAP *********************************************************************
ggnqinfa2 : ok=2 changed=0 unreachable=0 failed=0
But the latest version of Ansible, 2.5.0rc2, encounters the error below:
PLAY [Ebonding] *****************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************
ok: [ggnqinfa2]
TASK [Clearing cache of Server4] ************************************************************************************************************************************
fatal: [ggnqinfa2]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 2, err: chown: /var/tmp/ansible-tmp-1520704924.34-191458796685785/: Not owner\nchown: /var/tmp/ansible-tmp-1520704924.34-191458796685785/file.py: Not owner\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
PLAY RECAP **********************************************************************************************************************************************************
ggnqinfa2 : ok=1 changed=0 unreachable=0 failed=1
How can I run this playbook successfully with the latest version of Ansible?
Chances are the user you're using (wls10) does not have write access to the remote temporary directory /var/tmp.
This can be overridden in ansible.cfg by setting remote_tmp to a directory you have write access to, or to a "normal" temp directory (like /tmp) that has the sticky bit set.
For more info, see
http://docs.ansible.com/ansible/latest/intro_configuration.html#remote-tmp
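A sketch of that override in ansible.cfg (the path is illustrative; pick any directory the become user can write to):

```ini
[defaults]
# Illustrative: a sticky-bit, world-writable location such as /tmp
# avoids the chown failure when becoming an unprivileged user.
remote_tmp = /tmp/.ansible-tmp
```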
