Ansible to consolidate service status on multiple servers

The playbook below works: the inventory group contains multiple servers, and it checks the status of each service and displays it on the console.
The problem is that I want to consolidate the status of each service on every server and send it in an email, but the arrays (uistatus_app, uistatus_status, uistatus_print) seem to get re-initialized for each host.
Is there any way I can store the status of each run in a global variable and display it?
for example:
server1: nginx is active
server1: PHP is active
server2: nginx is active
server2: PHP is unknown
Thanks.
- name: checking services
  shell: 'systemctl is-active {{ item }}'
  with_items:
    - nginx
    - php74-php-fpm
  failed_when: false
  register: uistatus
  when: inventory_hostname in groups['stg-ui']

- name: get ui status in array
  set_fact:
    uistatus_app: "{{ uistatus_app|default([]) + [item.item] }}"
    uistatus_status: "{{ uistatus_status|default([]) + [item.stdout] }}"
  with_items: "{{ uistatus.results }}"
  when: inventory_hostname in groups['stg-ui']

- name: consolidate
  set_fact:
    uistatus_print: "{{ uistatus_print|default([]) + [inventory_hostname ~ ': ' ~ item.0 ~ ' is ' ~ item.1] }}"
  loop: "{{ query('together', uistatus_app, uistatus_status) }}"
  when: inventory_hostname in groups['stg-ui']

Set the variable with the list of the statuses in each host, e.g.
- hosts: all
  tasks:
    - name: checking services
      command: "echo {{ item }} is active"
      loop:
        - nginx
        - php74-php-fpm
      failed_when: False
      register: uistatus
      when: inventory_hostname in groups.stg_ui
    - set_fact:
        my_services: "{{ uistatus.results|default([])|
                         map(attribute='stdout')|
                         list }}"
    - debug:
        var: my_services
gives
ok: [server1] =>
  my_services:
  - nginx is active
  - php74-php-fpm is active
ok: [server2] =>
  my_services:
  - nginx is active
  - php74-php-fpm is active
Then run_once, extract the lists and create the dictionary all_services, e.g.
- set_fact:
    all_services: "{{ dict(ansible_play_hosts|zip(stats)) }}"
  vars:
    stats: "{{ ansible_play_hosts|
               map('extract', hostvars, 'my_services')|
               list }}"
  run_once: true
- debug:
    var: all_services
  run_once: true
gives
ok: [server1] =>
  all_services:
    server1:
    - nginx is active
    - php74-php-fpm is active
    server2:
    - nginx is active
    - php74-php-fpm is active
The dictionary is available to all hosts in the playbook.
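Since the original goal was to send the consolidated status by email, one more run_once task can render the dictionary into a message body. A minimal, untested sketch, assuming the community.general.mail module and a reachable SMTP relay (smtp.example.com and the addresses are placeholders):
- name: Mail the consolidated report
  community.general.mail:
    host: smtp.example.com        # placeholder SMTP relay
    port: 25
    to: ops@example.com           # placeholder recipient
    subject: Service status report
    body: |-
      {% for host, statuses in all_services.items() %}
      {% for status in statuses %}
      {{ host }}: {{ status }}
      {% endfor %}
      {% endfor %}
  run_once: true
  delegate_to: localhost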

Related

How to make a list from ansible_facts with multiple hosts

I'm trying to make a list of the IP addresses of various hosts and then use that list in another task. My question is: how can I pick an IP (I need the public IP) from the output of each host and add it to a list? I need the IPs that do not start with 10.
I extract this information by running this playbook:
- hosts: facts
  become: true
  gather_facts: True
  tasks:
    - debug:
        msg: "The ip: {{ item }}"
      with_items: "{{ ansible_all_ipv4_addresses }}"
Later, I need to use this list in another task:
- wait_for:
    host: "{{ item[0] }}"
    port: "{{ item[1] }}"
    state: started
    delay: 0
    timeout: 2
  delegate_to: localhost
  become: false
  ignore_errors: no
  ignore_unreachable: yes
  register: result
  failed_when: not result.failed
  with_nested:
    - [ IP LIST HERE ]
    - [443, 80, 9200, 9300, 22, 5432, 6432]
You can access those values from hostvars right away, then use a reject filter with a match test to drop what you don't want to test for.
Which, in a debug task, would give:
# note: ports list reduced for brevity
# note: ports list reduced for brevity
- debug:
    msg: "I should wait for interface {{ item.0 }}:{{ item.1 }}"
  loop: >-
    {{
      hostvars
      | dict2items
      | selectattr('key', 'in', ansible_play_hosts)
      | map(attribute='value.ansible_all_ipv4_addresses', default=[])
      | flatten
      | reject('match', '10\..*')
      | product(_ports)
    }}
  loop_control:
    label: "{{ item.0 }}"
  run_once: true
  delegate_to: localhost
  vars:
    _ports:
      - 22
      - 80
In my lab, this gives:
ok: [ansible-node-1 -> localhost] => (item=172.18.0.3) =>
  msg: I should wait for interface 172.18.0.3:22
ok: [ansible-node-1 -> localhost] => (item=172.18.0.3) =>
  msg: I should wait for interface 172.18.0.3:80
ok: [ansible-node-1 -> localhost] => (item=172.18.0.4) =>
  msg: I should wait for interface 172.18.0.4:22
ok: [ansible-node-1 -> localhost] => (item=172.18.0.4) =>
  msg: I should wait for interface 172.18.0.4:80
Try the example below
shell> cat pb.yml
- hosts: all
  vars:
    ip_list: "{{ ansible_play_hosts|
                 map('extract', hostvars, 'ansible_all_ipv4_addresses')|
                 map('first')|list }}"
    ip_list_reject: "{{ ip_list|reject('match', '10\\.')|list }}"
  tasks:
    - setup:
        gather_subset: network
    - block:
        - debug:
            var: ip_list
        - debug:
            var: ip_list_reject
        - wait_for:
            host: "{{ item.0 }}"
            port: "{{ item.1 }}"
            state: started
            delay: 0
            timeout: 2
          delegate_to: localhost
          register: result
          with_nested:
            - "{{ ip_list_reject }}"
            - [443, 80, 9200, 9300, 22, 5432, 6432]
      run_once: true

How to list only the running services with ansible_facts

How can I list all of a system's services with state == 'running', without providing an explicit list as in the first snippet?
- name: Populate service facts
  service_facts:
  no_log: true
  register: inspect_services2
  when: "ansible_facts.services[] is defined and ansible_facts.services[].state == 'running'"
I have managed to list them only if I use a list:
- name: Running Services
  debug:
    msg: "{{ ansible_facts.services[item].state == 'running' }}"
  when: "ansible_facts.services[item] is defined and ansible_facts.services[item].state == 'running'"
  loop: "{{ inspect_services2 }}"
  ignore_errors: yes
In a nutshell:
---
- name: work with service facts
  hosts: localhost
  tasks:
    - name: gather service facts
      service_facts:
    - name: show running services
      debug:
        msg: "{{ ansible_facts.services | dict2items
                 | selectattr('value.state', '==', 'running') | items2dict }}"
This gives you a dict with all info for all running services. If, e.g., you only want to display the names of those services, you could change the message in the debug task to:
msg: "{{ ansible_facts.services | dict2items
| selectattr('value.state', '==', 'running') | map(attribute='key') }}"
You are of course free to take that result and put it in a variable somewhere as an alias to reuse it. Below is a useless yet functional example that creates a file named after each running service on the target, just to illustrate:
---
- name: Work with service facts II
  hosts: localhost
  vars:
    # Note1: this will be undefined until service facts are gathered
    # Note2: this time this var will be a list of all dicts,
    # dropping the initial key which is identical to `name`
    running_services: "{{ ansible_facts.services | dict2items
                          | selectattr('value.state', '==', 'running') | map(attribute='value') }}"
  tasks:
    - name: gather service facts
      service_facts:
    - name: useless task creating a file with service name in tmp
      copy:
        content: "ho yeah. {{ item.name }} is running"
        dest: "/tmp/{{ item.name }}.txt"
      loop: "{{ running_services }}"
To list the running services only, just
- name: Loop over all services and print name
  debug:
    msg: "{{ item }}"
  when:
    - ansible_facts.services[item].state == 'running'
  with_items: "{{ ansible_facts.services }}"
Thanks to
Ansible: How to get ... running services?

Saving variables from a playbook run to a local file on the Ansible host

I'm sort of trying to build an inventory file from an Ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if successful, virsh list --all, and storing the values in a file on the Ansible host.
I've tried a few different playbook structures, but none have been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Can someone please guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not (libvirtd_status.rc == 0)
      ignore_errors: true
    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true
    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
Was able to cobble something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]
- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost
    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml
    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out the false positives (machines where libvirtd isn't running, or that have an empty list of VMs), because the above doesn't really handle that.
But at least there is output from all the KVM hosts!
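As a hedged sketch of the missing filtering (untested; it assumes the virt module returns the guest names under a list_vms key, which is what the registered all_vms variable above would hold), the write-out task could skip hosts where libvirtd is down or no guests exist:
- name: saving cumulative result, skipping hosts without guests
  lineinfile:
    line: '{{ ansible_hostname }} has {{ all_vms.list_vms }}'
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - libvirtd_status.rc == 0
    - all_vms.list_vms | default([]) | length > 0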

How to set a fact which is visible on all hosts in an Ansible role

I'm setting fact in a role:
- name: Check if manager already configured
shell: >
docker info | perl -ne 'print "$1" if /Swarm: (\w+)/'
register: swarm_status
- name: Init cluster
shell: >-
docker swarm init
--advertise-addr "{{ ansible_default_ipv4.address }}"
when: "'active' not in swarm_status.stdout_lines"
- name: Get worker token
shell: docker swarm join-token -q worker
register: worker_token_result
- set_fact:
worker_token: "{{ worker_token_result.stdout }}"
Then I want to access worker_token on other hosts. Here's my main playbook; the fact is defined in the swarm-master role:
- hosts: swarm_cluster
  become: yes
  roles:
    - docker

- hosts: swarm_cluster:&manager
  become: yes
  roles:
    - swarm-master

- hosts: swarm_cluster:&node
  become: yes
  tasks:
    - debug:
        msg: "{{ worker_token }}"
I'm getting an undefined variable error. How can I make it visible globally?
Of course, it works perfectly if I run the debug on the same host.
If your goal is just to access worker_token from another host, you can use the hostvars variable and iterate through the group where you defined it, like this:
- hosts: swarm_cluster:&node
  tasks:
    - debug:
        msg: "{{ hostvars[item]['worker_token'] }}"
      with_items: "{{ groups['manager'] }}"
If your goal is to define the variable globally, you can add a step to define a variable on all hosts like this:
- hosts: all
  tasks:
    - set_fact:
        worker_token_global: "{{ hostvars[item]['worker_token'] }}"
      with_items: "{{ groups['manager'] }}"

- hosts: swarm_cluster:&node
  tasks:
    - debug:
        var: worker_token_global
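Alternatively, since hostvars is reachable from every host, each node can copy the fact in a single task without looping over the group. A minimal sketch, assuming the token lives on the first member of the manager group:
- hosts: swarm_cluster:&node
  tasks:
    - set_fact:
        worker_token: "{{ hostvars[groups['manager'][0]]['worker_token'] }}"
    - debug:
        var: worker_token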

How to pass results back from include_tasks

I started learning Ansible yesterday, so I believe I may be risking an XY problem here, but still…
The main yml:
- hosts: localhost
vars_files:
[ "users.yml" ]
tasks:
- name: manage instances
#include_tasks: create_instance.yml
include_tasks: inhabit_instance.yml
with_dict: "{{users}}"
register: res
- name: print
debug: msg="{{res.results}}"
inhabit_instance.yml:
- name: Get instance info for {{ item.key }}
  ec2_instance_facts:
    profile: henryaws
    filters:
      "tag:name": "{{ item.key }}"
      instance-state-name: running
  register: ec2
- name: print
  debug:
    msg: "IP: {{ ec2.instances.0.public_ip_address }}"
So that's the IP that I'd like to have at the top level. I haven't found anything right away about return values of the include block…
Well, I've found a way that suits me; maybe it's even canonical?
main.yml:
- hosts: localhost
  vars_files:
    [ "users.yml" ]
  vars:
    ec_results: {}
  tasks:
    - name: manage instances
      #include_tasks: create_instance.yml
      include_tasks: inhabit_instance.yml
      with_dict: "{{ users }}"
      register: res
inhabit_instance.yml:
- name: Get instance info for {{ item.key }}
  ec2_instance_facts:
    profile: henryaws
    filters:
      "tag:name": "{{ item.key }}"
      instance-state-name: running
  register: ec2
- name: update
  set_fact:
    ec_results: "{{ ec_results|combine({item.key: ec2.instances.0.public_ip_address}) }}"
Thanks for your answer; it helped me find a solution to my issue when my task was called in a loop. My solution was just to use lists, so for the example above, it would be:
The main yml:
- hosts: localhost
  vars_files:
    [ "users.yml" ]
  vars:
    ec_results: []
  tasks:
    - name: manage instances
      #include_tasks: create_instance.yml
      include_tasks: inhabit_instance.yml
      with_dict: "{{ users }}"
    - name: print
      debug: msg="{{ ec_results }}"
inhabit_instance.yml:
- name: Get instance info for {{ item.key }}
  ec2_instance_facts:
    profile: henryaws
    filters:
      "tag:name": "{{ item.key }}"
      instance-state-name: running
  register: ec2
- name: Add IP to results
  set_fact:
    ec_results: "{{ ec_results + [ec2.instances.0.public_ip_address] }}"
In my code, include_tasks was in a loop, so ec_results ended up containing a list of the results from each iteration of the loop.
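For completeness, the accumulated list can then be consumed like any other variable in a later task; a trivial sketch:
- name: Show each collected IP
  debug:
    msg: "Instance IP collected by the include loop: {{ item }}"
  loop: "{{ ec_results }}"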
