How to sequentially run all the tasks per host in Ansible

I am writing an Ansible playbook where I have 3 Tomcat nodes. My script does a lot of things, like copying the release from Nexus to the application nodes, deploying it, starting the Tomcats, etc.
What I want to achieve is to do this sequentially: the playbook should run for one host and, when it completes, start on the next host. My inventory looks like below, and I am using group_vars as I have multiple environments (prod, preprod, etc.).
Can someone help?
[webserver]
tomcat1
tomcat2
tomcat3

Q: "The playbook should run for one host and when it gets completed, it should start for another host."
A: Use serial. For example:
shell> cat playbook.yml
- hosts: webserver
  gather_facts: false
  serial: 1
  tasks:
    - debug:
        var: inventory_hostname
shell> ansible-playbook playbook.yml
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat1] => {
"inventory_hostname": "tomcat1"
}
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat2] => {
"inventory_hostname": "tomcat2"
}
PLAY [webserver] ***
TASK [debug] ***
ok: [tomcat3] => {
"inventory_hostname": "tomcat3"
}
PLAY RECAP ***
tomcat1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
tomcat2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
tomcat3: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
See How to build your inventory
shell> cat hosts
[webserver]
tomcat1
tomcat2
tomcat3
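Applied to the deployment described in the question, the same serial: 1 keyword makes the whole copy/deploy/restart sequence finish on one Tomcat node before the next one starts. A rough sketch (the task names and debug bodies are placeholders for the real Nexus copy and Tomcat deploy tasks, not a working deployment):

```yaml
- hosts: webserver
  serial: 1                    # finish every task on one node before the next starts
  tasks:
    - name: Copy release from Nexus            # placeholder for your copy task
      debug:
        msg: "fetching release for {{ inventory_hostname }}"
    - name: Deploy release and restart Tomcat  # placeholder for your deploy tasks
      debug:
        msg: "deploying on {{ inventory_hostname }}"
```

Note that serial also accepts a percentage or a list of batch sizes, should you later want rolling batches larger than one host.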

Related

Using become in ansible locally

I am trying to understand --become in order to use Ansible to run some local tasks on my CentOS machine. I tried several Ansible modules (copy, unarchive) with become; each results in a different kind of error.
Platform used: centos 7
Ansible (installed in a python 3 virtual env) version:
(ansible) [maadam@linux update_centos]$ ansible --version
ansible 2.10.16
config file = None
configured module search path = ['/home/maadam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/maadam/Sources/python/venv/ansible/lib64/python3.6/site-packages/ansible
executable location = /home/maadam/Sources/python/venv/ansible/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
I tried to reproduce the example provided by @techraf in this issue to test become: Using --become for ansible_connection=local.
I used the same playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
So I expected the same result as this:
(ansible) [maadam@linux update_centos]$ sudo whoami
root
Without become:
(ansible) [maadam@linux update_centos]$ ansible-playbook playbook.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [command] *****************************************************************************************
changed: [localhost]
TASK [debug] *******************************************************************************************
ok: [localhost] => {
"whoami.stdout": "maadam"
}
PLAY RECAP *********************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
With become I have this error:
(ansible) [maadam@linux update_centos]$ ansible-playbook playbook.yml --become
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [command] *****************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/var/tmp/sclPip796: line 8: -H: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP *********************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So I don't understand what I am missing with become.
Thanks for your help.
In the ansible.cfg file, check the become_method setting; you can use "sudo su -".
I don't know if I am handling this correctly, but if I run my playbook as root, I have no error:
(ansible) [maadam@linux update_centos]$ sudo ansible-playbook playbook.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [command] ****************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [localhost] => {
"whoami.stdout": "root"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Not sure this is the right way of doing things locally with Ansible. Of course, if you are already root, there is no need for privilege escalation.
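Another thing worth checking, as the earlier comment hints, is the become_method configuration. The escalation settings can be made explicit in the play itself; the values below only illustrate the defaults and are not a guaranteed fix for the "-H: command not found" error:

```yaml
- hosts: localhost
  connection: local
  gather_facts: no
  become: yes
  become_method: sudo    # try "su" instead if sudo is wrapped or misconfigured
  become_user: root
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
```

With explicit play-level settings like these, running plain ansible-playbook playbook.yml (without the --become flag) should attempt the same escalation.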

"apt" module in Ansible playbook execution randomly fails on different hosts each time with message "Failed to lock apt for exclusive operation"

When I try to install a list of packages ('apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv') on a group of hosts (app01 and app02),
it fails randomly on either app01 or app02.
After the first few retries the required packages are already installed on the hosts, yet subsequent runs continue to fail randomly on app01 or app02 instead of simply reporting success.
The Ansible controller and hosts are all running Ubuntu 16.04.
The controller has Ansible 2.8.4.
I have already set "become: yes" in the playbook.
I have already tried running apt-get clean and apt-get update on the hosts.
Inventory file
[webserver]
app01 ansible_python_interpreter=python3
app02 ansible_python_interpreter=python3
Playbook
---
- hosts: webserver
  tasks:
    - name: install web components
      become: yes
      apt:
        name: ['apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv']
        state: present
        update_cache: yes
In the first run, the execution fails on host app01; in the second run, it fails on host app02.
1st Run
shekhar@control:~/ansible$ ansible-playbook -v playbooks/webservers.yml
Using /home/shekhar/ansible/ansible.cfg as config file
PLAY [webserver] ******************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [app01]
ok: [app02]
TASK [install web components] ******************************************************************************
fatal: [app01]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
ok: [app02] => {"cache_update_time": 1568203558, "cache_updated": false, "changed": false}
PLAY RECAP ******************************************************************************
app01 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
app02 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2nd Run
shekhar@control:~/ansible$ ansible-playbook -v playbooks/webservers.yml
Using /home/shekhar/ansible/ansible.cfg as config file
PLAY [webserver] ******************************************************************************
TASK [Gathering Facts] ******************************************************************************
ok: [app01]
ok: [app02]
TASK [install web components] ******************************************************************************
fatal: [app02]: FAILED! => {"changed": false, "msg": "Failed to lock apt for exclusive operation"}
ok: [app01] => {"cache_update_time": 1568203558, "cache_updated": false, "changed": false}
PLAY RECAP ******************************************************************************
app01 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
app02 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The behavior is a bit strange: one succeeds and the other fails each time.
My guess is that your app01 and app02 are actually the same host! If so, it sounds normal that the 2 jobs fight against each other.
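If app01 and app02 really are distinct machines, the lock error can also come from something like unattended-upgrades briefly holding the dpkg/apt lock on a host. A common workaround is to retry the task until the lock is free; a sketch (the retry counts are arbitrary):

```yaml
- name: install web components
  become: yes
  apt:
    name: ['apache2', 'libapache2-mod-wsgi', 'python-pip', 'python-virtualenv']
    state: present
    update_cache: yes
  register: apt_result
  until: apt_result is success   # retry while another process holds the apt lock
  retries: 5
  delay: 10
```

This does not fix two inventory names resolving to the same machine, though; checking that app01 and app02 are truly different hosts comes first.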

Ansible docker connection with identical container names on more than one host

I'm trying to run an Ansible playbook against multiple hosts that run containers with the same name. There are 3 hosts, each running a container called "web". I'm trying to use the docker connection.
I'm using the typical pattern of a hosts file which works fine for running ansible modules on the host.
- name: Ping
  ping:

- name: Add web container to inventory
  add_host:
    name: web
    ansible_connection: docker
    ansible_docker_extra_args: "-H=tcp://{{ ansible_host }}:2375"
    ansible_user: root
  changed_when: false

- name: Remove old logging directory
  delegate_to: web
  file:
    state: absent
    path: /var/log/old_logs
It only works against the first host in the hosts file:
PLAY [all]
TASK [Gathering Facts]
ok: [web1]
ok: [web2]
ok: [web3]
TASK [web-playbook : Ping]
ok: [web1]
ok: [web2]
ok: [web3]
TASK [web-playbook : Add sensor container to inventory]
ok: [web1]
PLAY RECAP
web1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web2 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web3 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I've tried setting name to web_{{ ansible_host }} to make it unique between hosts, but it then tries to connect to web_web1. I've been running the commands with sudo docker exec web rm -rf /var/log/old_logs, which of course works, but I'd like to be able to use the Ansible modules directly against the docker containers.
The result you get is absolutely expected. Quoting the add_host documentation
This module bypasses the play host loop and only runs once for all the hosts in the play, if you need it to iterate use a with-loop construct.
i.e. you cannot rely on the hosts loop for the add_host and need to make a loop yourself.
Moreover, you definitely need to have different names (i.e. inventory_hostname) for your dynamically created hosts but since all your docker containers have the same name, their ansible_host should be the same.
Assuming all your docker host machines are in the group dockerhosts, the following playbook should do the job. I'm currently not in a situation where I can test this myself so you may have to adjust a bit. Let me know if it helped you and if I have to edit my answer.
Note that even though the add_host task will not naturally loop, I kept the hosts of your original group in the first play so that facts are gathered and correctly populated in the hostvars magic variable.
---
- name: Create dynamic inventory for docker containers
  hosts: dockerhosts
  tasks:
    - name: Add web container to inventory
      add_host:
        name: "web_{{ item }}"
        groups:
          - dockercontainers
        ansible_connection: docker
        ansible_host: web
        ansible_docker_extra_args: "-H=tcp://{{ hostvars[item].ansible_host }}:2375"
        ansible_user: root
      loop: "{{ groups['dockerhosts'] }}"

- name: Play needed commands on containers
  hosts: dockercontainers
  tasks:
    - name: Remove old logging directory
      file:
        state: absent
        path: /var/log/old_logs

Ansible + Cisco idempotence

So I'm doing some testing with Ansible to manage Cisco devices (specifically a 3750 in this case).
I'm able to add my VLANs and loopbacks with no issue.
I'm just trying to get Ansible to stop registering a change when the loopback or VLAN already exists.
Right now my play looks like this:
- name: Set the IP for Loop 0
  ios_config:
    provider: "{{ connection }}"
    lines:
      - description AnsibleLoop0
      - ip address 8.8.8.8 255.255.255.0
    before:
      - interface Loopback0
    match: exact
Anytime this task runs, Ansible registers it as a change:
changed: [switch] => {"changed": true, "updates": ["interface Loopback0", "description AnsibleLoop0", "ip address 8.8.8.8 255.255.255.0"], "warnings": []}
I've tried different match types (line, exact) to no avail.
Maybe I'm getting something wrong here.
I thought that if the play inserted the lines exactly as they show up in a show run, it wouldn't register as a change?
Any help here would be appreciated!
You could be experiencing this issue:
https://github.com/ansible/ansible/pull/24345
Are you saving the configuration changes to the Cisco device each time, using save: "yes" in your playbook? I had the same problem, and I was saving the config each time the playbook ran. I set it to no just to test: after the first run reported changed=True, sequential runs reported changed=False.
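In newer versions of the module, the save option was replaced by save_when, which can restrict the write-out to runs where the config actually changed. A sketch of the idea (untested; keep your own lines/before/match values):

```yaml
- name: Set the IP for Loop 0
  ios_config:
    provider: "{{ connection }}"
    lines:
      - description AnsibleLoop0
      - ip address 8.8.8.8 255.255.255.0
    before:
      - interface Loopback0
    match: exact
    save_when: modified   # write to startup-config only when the config changed
```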
I know this thread was closed long ago, but I just had the same problem and found what was causing it for me.
I am configuring L3 interfaces using a JSON file as a source, in which I did not realize I was using an uppercase value for the interface name, like this: Loopback123
This is the result when using that:
changed: [NX-SPINE-2] => (item={'interface': 'Loopback123', 'ip': '123.1.2.8/32', 'description': 'Loopback interface for Ansible Testing'})
changed: [NX-SPINE-1] => (item={'interface': 'Loopback123', 'ip': '123.1.2.7/32', 'description': 'Loopback interface for Ansible Testing'})
changed: [NX-LEAF-3] => (item={'interface': 'Loopback123', 'ip': '123.1.2.5/32', 'description': 'Loopback interface for Ansible Testing'})
changed: [NX-LEAF-2] => (item={'interface': 'Loopback123', 'ip': '123.1.2.4/32', 'description': 'Loopback interface for Ansible Testing'})
changed: [NX-LEAF-4] => (item={'interface': 'Loopback123', 'ip': '123.1.2.6/32', 'description': 'Loopback interface for Ansible Testing'})
PLAY RECAP ****************************************************************************************************************************************************
NX-LEAF-1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-LEAF-2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-LEAF-3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-LEAF-4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-SPINE-1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-SPINE-2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
This is because the interface name in the source differs from how the device normalizes and displays the name when showing the configuration:
interface loopback0
description Management - BGP peering
Once I corrected the interface name to lowercase in the source file, the idempotency check passed for me:
ok: [NX-LEAF-4] => (item={'interface': 'loopback123', 'ip': '123.1.2.6/32', 'description': 'Loopback interface for Ansible Testing'})
ok: [NX-LEAF-3] => (item={'interface': 'loopback123', 'ip': '123.1.2.5/32', 'description': 'Loopback interface for Ansible Testing'})
PLAY RECAP ****************************************************************************************************************************************************
NX-LEAF-1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-LEAF-2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-LEAF-3 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-LEAF-4 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-SPINE-1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
NX-SPINE-2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Hope it helps.
Cheers!

Unable to access hosts beside local host in Ansible

I'm self-learning Ansible and have the file structure below, with my customized ansible.cfg and inventory file:
ansible/              <-- working level
    ansible.cfg       (customized cfg)
    dev               (inventory file)
    playbooks/        <-- issue level
        hostname.yml
ansible.cfg
[defaults]
inventory=./dev
dev
[loadbalancer]
lb01
[webserver]
app01
app02
[database]
db01
[control]
control ansible_connection=localansible
hostname.yml
---
- hosts: all
  tasks:
    - name: get server hostname
      command: hostname
My question is: when I run ansible-playbook playbooks/hostname.yml at the ansible folder level, everything works fine:
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [control]
ok: [app01]
ok: [db01]
ok: [lb01]
ok: [app02]
TASK [get server hostname] *****************************************************
changed: [db01]
changed: [app01]
changed: [app02]
changed: [lb01]
changed: [control]
PLAY RECAP *********************************************************************
app01 : ok=2 changed=1 unreachable=0 failed=0
app02 : ok=2 changed=1 unreachable=0 failed=0
control : ok=2 changed=1 unreachable=0 failed=0
db01 : ok=2 changed=1 unreachable=0 failed=0
lb01 : ok=2 changed=1 unreachable=0 failed=0
However, when I go into the playbooks folder and run ansible-playbook hostname.yml, it gives me a warning saying:
[WARNING]: provided hosts list is empty, only localhost is available
PLAY RECAP *********************************************************************
Is there anything that prevents the playbook from accessing the inventory file?
Ansible does not traverse parent directories in search of the configuration file ansible.cfg. If you want to use a custom configuration file, you need to place it in the current directory and run the Ansible executables from there, or define an environment variable ANSIBLE_CONFIG pointing to the file.
In your case, changing the directory to ansible and executing from there is the proper way:
ansible-playbook playbooks/hostname.yml
I don't know where you got the idea that Ansible would check for the configuration file in the parent directory - the documentation is unambiguous:
Changes can be made and used in a configuration file which will be processed in the following order:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
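So, as an alternative to always changing into the ansible directory, you could point the ANSIBLE_CONFIG environment variable at the custom file and run from anywhere (the paths below match the question's layout but are illustrative):

```shell
# one-off, from inside the playbooks folder
ANSIBLE_CONFIG=../ansible.cfg ansible-playbook hostname.yml

# or export it once per shell session
export ANSIBLE_CONFIG=~/ansible/ansible.cfg
ansible-playbook playbooks/hostname.yml
```

Since ANSIBLE_CONFIG is first in the lookup order, it wins over any ansible.cfg in the current directory.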