How to Print the Version of an updated Package? - ansible

I'm just trying to get a notification from my weekly update Ansible task.
The idea is to get a notification when a specific package got an update, e.g. "Jellyfin got the update 10.8.7-1".
So far the playbook looks like this:
    - name: Manages Packages
      hosts: ubuntu
      become: true
      tasks:
        - name: Cache Update and Upgrade
          apt:
            update_cache: yes
            cache_valid_time: 86400
            upgrade: full

        - name: Create Register
          shell: grep -E "^$(date +%Y-%m-%d).+ (install|upgrade) " /var/log/dpkg.log | cut -d " " -f 3-5
          register: result

        - name: Show Output Register
          debug: msg="{{ result.stdout_lines }}"
          when: result.stdout_lines is defined

        - name: Send notify to Telegram
          telegram:
            token: '###'
            chat_id: '###'
            msg: '{{ ansible_hostname }} updated {{ result.stdout_lines | length }} packages: {{ result.stdout_lines }}'
          when: result.stdout_lines | length > 0
As a workaround I print all upgraded packages, but I am hoping for a more elegant solution.
The interesting result.stdout_lines could be:
"upgrade jellyfin-server:amd64 10.8.7-1",
"upgrade jellyfin-web:all 10.8.7-1",
"upgrade jellyfin:all 10.8.7-1",
I tried:
    - name: Check Jellyfin Upgrade
      telegram:
        token: '###'
        chat_id: '###'
        msg: 'Jellyfin got a new update!'
      when: '"upgrade jellyfin:all 10.8.7-1" in result.stdout_lines'
but this needs the whole string to match, which misses the point, and I have no idea how to extract just the version into the notification.
Does someone have an idea?
Thanks in advance!

Create a dictionary of installed and upgraded packages.
Use the module community.general.read_csv to parse the log:

    - name: Parse dpkg.log
      community.general.read_csv:
        path: /var/log/dpkg.log
        delimiter: ' '
        fieldnames: [date, time, p1, p2, p3, p4]
      register: dpkg_log
See the registered variable. The list of dictionaries will be in the attribute dpkg_log.list:

    - debug:
        var: dpkg_log
      when: debug1|d(false)|bool
Group the items by date, select today's items, select install/upgrade items, and create the dictionary of packages and their versions. Declare the variables:

    dpkg_log_dict: "{{ dict(dpkg_log.list|groupby('date')) }}"
    today: "{{ '%Y-%m-%d'|strftime }}"
    pkg_arch_ver: "{{ dpkg_log_dict[today]|
                      selectattr('p1', 'in', ['install', 'upgrade'])|
                      items2dict(key_name='p2', value_name='p3') }}"
gives

    pkg_arch_ver:
      ansible-core:all: 2.12.9-1ppa~focal
      containerd.io:amd64: 1.6.8-1
      docker-ce-cli:amd64: 5:20.10.18~3-0~ubuntu-focal
      docker-ce-rootless-extras:amd64: 5:20.10.18~3-0~ubuntu-focal
      docker-ce:amd64: 5:20.10.18~3-0~ubuntu-focal
      docker-scan-plugin:amd64: 0.17.0~ubuntu-focal
Remove the architecture from the keys:

    pkg_ver: "{{ dict(pkg_arch_ver.keys()|map('split', ':')|map('first')|list|
                 zip(pkg_arch_ver.values())) }}"
gives

    pkg_ver:
      ansible-core: 2.12.9-1ppa~focal
      containerd.io: 1.6.8-1
      docker-ce: 5:20.10.18~3-0~ubuntu-focal
      docker-ce-cli: 5:20.10.18~3-0~ubuntu-focal
      docker-ce-rootless-extras: 5:20.10.18~3-0~ubuntu-focal
      docker-scan-plugin: 0.17.0~ubuntu-focal
Reporting should be trivial now:

    - community.general.mail:
        to: admin
        subject: Ansible Upgrade
        body: |
          ansible-core updated to version: {{ pkg_ver['ansible-core'] }}
      when: pkg_ver.keys() is contains 'ansible-core'
gives

    From: root@test.example.org
    To: admin@test.example.org
    Cc:
    Subject: Ansible Upgrade
    Date: Sat, 31 Dec 2022 04:15:21 +0100
    X-Mailer: Ansible mail module

    ansible-core updated to version: 2.12.9-1ppa~focal
Example of a complete playbook for testing:

    - hosts: localhost
      become: true
      vars:
        dpkg_log_dict: "{{ dict(dpkg_log.list|groupby('date')) }}"
        today: "{{ '%Y-%m-%d'|strftime }}"
        pkg_arch_ver: "{{ dpkg_log_dict[today]|
                          selectattr('p1', 'in', ['install', 'upgrade'])|
                          items2dict(key_name='p2', value_name='p3') }}"
        pkg_ver: "{{ dict(pkg_arch_ver.keys()|map('split', ':')|map('first')|list|
                     zip(pkg_arch_ver.values())) }}"
      tasks:
        - name: Cache Update and Upgrade
          apt:
            update_cache: yes
            cache_valid_time: 86400
            upgrade: full
          when: upgrade|d(false)|bool

        - name: Parse dpkg.log
          community.general.read_csv:
            path: /var/log/dpkg.log
            delimiter: ' '
            fieldnames: [date, time, p1, p2, p3, p4]
          register: dpkg_log

        - debug:
            var: dpkg_log
          when: debug1|d(false)|bool

        - debug:
            var: pkg_arch_ver

        - debug:
            var: pkg_ver

        - community.general.mail:
            to: admin
            subject: Ansible Upgrade
            body: |
              ansible-core updated to version: {{ pkg_ver['ansible-core'] }}
          when: pkg_ver.keys() is contains 'ansible-core'
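Applied to the original question, the same pkg_ver dictionary carries the Jellyfin version whenever it was upgraded today. A minimal sketch, not tested here, reusing the telegram task from the question (the token/chat_id placeholders are as given there):

    - name: Check Jellyfin upgrade
      telegram:
        token: '###'
        chat_id: '###'
        msg: 'Jellyfin got the update {{ pkg_ver["jellyfin"] }}'
      when: "'jellyfin' in pkg_ver"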

Related

Ansible - Prevent playbook executing simultaneously

I have a playbook that controls a clustered application. The issue is that this playbook can be called/executed a few different ways (manually on the command line [multiple SREs working], as a scheduled task, or programmatically via a 3rd-party system).
The problem is that if the playbook runs simultaneously with itself, it could cause issues for the application (due to the nature of the application).
Question:
Is there a way to prevent the same playbook from running concurrently on the same Ansible server?
Environment:

    ansible [core 2.11.6]
      config file = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/ansible.cfg
      configured module search path = ['/etc/ansible/library/modules']
      ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
      ansible collection location = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/collections
      executable location = /usr/local/bin/ansible
      python version = 3.9.7 (default, Nov 1 2021, 11:34:21) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
      jinja version = 3.0.2
      libyaml = True
You could test whether a lock file exists at the start of the playbook and end the play with meta if it does; if not, you create the file to block another launch:
    - name: lock_test
      hosts: all
      vars:
        lock_file_path: /tmp/ansible-playbook.lock
      pre_tasks:
        - name: Check if some file exists
          delegate_to: localhost
          stat:
            path: "{{ lock_file_path }}"
          register: lock_file

        - block:
            - name: "end play"
              debug:
                msg: "playbook already launched, ending play"
            - meta: end_play
          when: lock_file.stat.exists

        - name: create lock_file {{ lock_file_path }}
          delegate_to: localhost
          file:
            path: "{{ lock_file_path }}"
            state: touch

      # ****************** tasks start
      tasks:
        - name: debug
          debug:
            msg: "something to do"
      # ****************** tasks end

      post_tasks:
        - name: delete the lock file {{ lock_file_path }}
          delegate_to: localhost
          file:
            path: "{{ lock_file_path }}"
            state: absent
Note that this only guards a single playbook: even if the first playbook stops, a second one will still launch unless it performs the same test.
There is a small window of time between the test and the creation of the file, but the probability of launching the same playbook twice within the same second is very low, and the solution is still better than having nothing.
Another option is to take a lock on an existing file and test whether the file is locked or not, but be careful with this approach; see the lock and flock Unix commands.
You can create a lockfile on the controller with the PID of the ansible-playbook process.
    - delegate_to: localhost
      vars:
        lockfile: /tmp/thisisalockfile
        my_pid: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"
        lock_pid: "{{ lookup('file', lockfile) }}"
      block:
        - name: Lock file
          copy:
            dest: "{{ lockfile }}"
            content: "{{ my_pid }}"
          when: lockfile is not exists
                or ('/proc/' ~ lock_pid) is not exists
                or 'ansible-playbook' not in lookup('file', '/proc/' ~ lock_pid ~ '/cmdline')

        - name: Make sure we won the lock
          assert:
            that: lock_pid == my_pid
            fail_msg: "{{ lockfile }} is locked by process {{ lock_pid }}"
Finding the current PID is the trickiest part; $PPID in the lookup is still the PID of a child, so we're grabbing the grandparent out of /proc/.
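A quick way to sanity-check that lookup (a sketch, not part of the locking logic itself): print the value and compare it with the ansible-playbook PID shown by ps on the controller.

    - name: Show the PID this run would write into the lockfile
      delegate_to: localhost
      debug:
        msg: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"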
I wanted to post this here, though I do not consider it a final/perfect answer; it does work for general purposes.
I put this playbook_lock.yml at the root of my playbook and call it before any roles.
playbook_lock.yml:
    # ./playbook_lock.yml
    #
    ## NOTES:
    ## - Uses '/tmp/' on Ansible server as lock file directory
    ## - Format of lock file: E.g. 129416_20211103094638_playbook_common_01.lock
    ## -- Detailed explanation further down
    ## - Race-condition:
    ## -- Assumption playbooks will not run within 10sec of each other
    ## -- Assumption lockfiles were not deleted within 10sec
    ## -- If running the playbook manually with manual input of Ansible Vault
    ## --- Enter creds within 10 sec or the playbook will consider this run legacy
    ## - Built logic to only use ansible.builtin modules to not add additional requirements
    ##
    #
    ---
    ## Build a transaction ID from year/month/day/hour/min/sec
    - name: debug_transactionID
      debug:
        msg: "{{ transactionID }}"
      vars:
        filter: "{{ ansible_date_time }}"
        transactionID: "{{ filter.year + filter.month + filter.day + filter.hour + filter.minute + filter.second }}"
      run_once: true
      delegate_to: localhost
      register: reg_transactionID

    ## Find current playbook PID
    ## Race-condition => assumption playbooks will not run within 10sec of each other
    ## If playbook is already running >10secs, this return will be empty
    - name: debug_current_playbook_pid
      ansible.builtin.shell:
        ## search ps for any command matching the name of the playbook | remove the 'grep' result | return only the 1st one (if etime < 10sec)
        cmd: "ps -e -o 'pid,etimes,cmd' | grep {{ ansible_play_name }} | grep -v grep | awk 'NR==1{if($2<10) print $1}'"
      changed_when: false
      run_once: true
      delegate_to: localhost
      register: reg_current_playbook_pid

    ## Check for existing lock files
    - name: find_existing_lock_files
      ansible.builtin.find:
        paths: /tmp
        patterns: "*_{{ ansible_play_name }}.lock"
        age: 1s
      run_once: true
      delegate_to: localhost
      register: reg_existing_lock_files

    ## Check and verify existing lock files
    - name: block_discovered_existing_lock_files
      block:
        ## Build fact of all lock files discovered
        - name: fact_existing_lock_files
          ansible.builtin.set_fact:
            fact_existing_lock_files: "{{ fact_existing_lock_files | default([]) + [item.path] }}"
          loop: "{{ reg_existing_lock_files.files }}"
          run_once: true
          delegate_to: localhost
          when:
            - reg_existing_lock_files.matched > 0

        ## Build fact of all discovered lock files
        - name: fact_playbook_lock_file_dict
          ansible.builtin.set_fact:
            fact_playbook_lock_file_dict: "{{ fact_playbook_lock_file_dict | default([]) + [data] }}"
          vars:
            ## E.g. lockfile => 129416_20211103094638_playbook_common_01.lock
            var_pid: "{{ item.split('/')[2].split('_')[0] }}"  ## extract the 1st portion = PID
            var_transid: "{{ item.split('/')[2].split('_')[1] }}"  ## extract 2nd portion = TransactionID
            var_playbook: "{{ item.split('/')[2].split('_')[2:] | join('_') }}"  ## extract the remainder and join back together = playbook file
            data:
              {pid: "{{ var_pid }}", transid: "{{ var_transid }}", playbook: "{{ var_playbook }}"}
          loop: "{{ fact_existing_lock_files }}"
          run_once: true
          delegate_to: localhost

        ## Check each discovered lock file
        ## Verify the PID is still operational
        - name: shell_verify_pid_is_active
          ansible.builtin.shell:
            cmd: "ps -p {{ item.pid }} | awk 'NR==2{print $1}'"
          loop: "{{ fact_playbook_lock_file_dict }}"
          changed_when: false
          delegate_to: localhost
          register: reg_verify_pid_is_active

        ## Build fact of discovered previous playbook PIDs
        - name: fact_previous_playbook_pids
          ansible.builtin.set_fact:
            fact_previous_playbook_pids: "{{ fact_previous_playbook_pids | default([]) + [item.stdout | int] }}"
          loop: "{{ reg_verify_pid_is_active.results }}"
          run_once: true
          delegate_to: localhost

        ## Build fact is playbook already operational
        ## Add PIDs together
        ## If SUM == 0 => No PIDs found (no previous playbooks running)
        ## If SUM != 0 => previous playbook is still operational
        - name: fact_previous_playbook_operational
          ansible.builtin.set_fact:
            fact_previous_playbook_operational: "{{ ((fact_previous_playbook_pids | sum) | int) != 0 }}"
      when:
        - reg_existing_lock_files.matched > 0
        - reg_current_playbook_pid.stdout is defined

    ## Continue with playbook, as no previous instances running
    - name: block_continue_playbook_operations
      block:
        ## Cleanup legacy lock files, as the PIDs are not operational
        - name: stat_cleanup_legacy_lock_files
          ansible.builtin.file:
            path: "{{ item }}"
            state: absent
          loop: "{{ fact_existing_lock_files }}"
          run_once: true
          delegate_to: localhost
          when: fact_existing_lock_files | length >= 1

        ## Create lock file for current playbook
        - name: stat_create_playbook_lock_file
          ansible.builtin.file:
            path: "/tmp/{{ var_playbook_lock_file }}"
            state: touch
            mode: '0644'
          vars:
            var_playbook_lock_file: "{{ reg_current_playbook_pid.stdout }}_{{ reg_transactionID.msg }}_{{ ansible_play_name }}.lock"
          run_once: true
          delegate_to: localhost
      when:
        - reg_current_playbook_pid.stdout is defined

    ## Fail & exit playbook, as previous playbook is still operational
    - name: block_playbook_already_operational
      block:
        - name: fail
          fail:
            msg: 'Playbook "{{ ansible_play_name }}" is already operational! This playbook will now exit without any modifications!!!'
          run_once: true
          delegate_to: localhost
      when: (fact_previous_playbook_operational is true) or
            (reg_current_playbook_pid.stdout is not defined)
    ...
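For illustration, a sketch of how this guard might be wired into a playbook, per the note above about calling it before any roles (the role name here is hypothetical):

    # site.yml (sketch)
    - hosts: all
      pre_tasks:
        - name: Acquire playbook lock
          ansible.builtin.include_tasks: playbook_lock.yml
      roles:
        - common  # hypothetical role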

ansible looping through the result and finding files

I am using Ansible to configure 10-20 Linux systems. I have a set of tools that I define in my inventory files with versions, as:
    tools:
      - tool: ABC
        version: 7.8
      - tool: XYZ
        version: 8.32.1
Now, in my playbook YAML file, I would like to loop through them and have the necessary installation logic, such as:

    # DEBUG tools loop
    - name: Find installer files
      copy:
        src=
      with_items:
        - "{{ tools }}"
      when:
        tools.tool == "ABC"
In my case, {{tools.tool}}/{{tools.version}} has a tgz file which I need to unarchive at a remote location. Do you know how to do this? I have tried these:
    - name: Find installer files
      vars:
        files: {{ lookup("fileglob",'tools/{{item.tool}}/linux/{{item.version}}/*') }}
      unarchive:
        src: "{{ files }}"
        dest: "tools/{{item.tool}}/{{item.version}}/"
      with_items:
        - "{{ tools }}"
      when:
        item.tool == "ABC"
    - name: Find installer files
      debug:
        msg: "{{ item }}"
      with_items:
        - "{{ tools }}"
      with_fileglob:
        - "tools/{{item.tool}}/linux/{{item.version}}/*"
      when:
        item.toolchain == "ABC"
But none worked. Thanks for the help.
It's not that simple: your own solution breaks if there are multiple files in the directory, I assume.
So if you only have one file in the directory, I wouldn't use fileglob at all, but would define a fixed name for it that you can generate from the tool and version.
I also often see the need for this sort of thing, but I have not found any nice solution for it, only something as ugly as this:
    - name: example book
      hosts: localhost
      become: false
      gather_facts: false
      vars:
        tools:
          - tool: ABC
            version: 7.8
          - tool: XYZ
            version: 8.32.1
        tools_files: []
      tasks:
        - name: prepare facts
          set_fact:
            tools_files: "{{ tools_files + [{'tool': item.tool | string, 'version': item.version | string, 'files': lookup('fileglob', 'tools/' ~ item.tool ~ '/linux/' ~ item.version ~ '/*', wantlist=True)}] }}"
          with_items:
            - "{{ tools }}"

        - name: action loop
          debug:
            msg: "{{ {'src': item[1], 'dest': 'tools/' ~ item[0].tool ~ '/' ~ item[0].version ~ '/'} }}"
          with_subelements:
            - "{{ tools_files }}"
            - files
          when:
            item[0].tool == "ABC"
or
    - name: example book
      hosts: localhost
      become: false
      gather_facts: false
      vars:
        tools:
          - tool: ABC
            version: 7.8
          - tool: XYZ
            version: 8.32.1
        tools_files: []
      tasks:
        - name: prepare facts
          set_fact:
            tools_files: "{{ tools_files + [{'tool': item.tool | string, 'version': item.version | string, 'files': lookup('fileglob', 'tools/' ~ item.tool ~ '/linux/' ~ item.version ~ '/*', wantlist=True)}] }}"
          with_items:
            - "{{ tools }}"

        - name: action loop
          debug:
            msg: "{{ {'src': item[1], 'dest': 'tools/' ~ item[0].tool ~ '/' ~ item[0].version ~ '/'} }}"
          with_items:
            - "{{ tools_files | subelements('files') }}"
          when:
            item[0].tool == "ABC"
Maybe I am missing something, because this is a very basic feature: looping through an array and generating a result array while being able to use every available function, not just map with filters, where some important things are simply unavailable or unusable because map always passes the input as the first argument to the filter.
That was actually simple. This worked for me:

    - name: Find installer files
      unarchive:
        src: "{{ lookup('fileglob', 'tools/' ~ item.tool ~ '/linux/' ~ item.version ~ '/*') }}"
        dest: "tools/{{item.tool}}/{{item.version}}/"
      with_items:
        - "{{ tools }}"
      when:
        item.tool == "ABC"
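One caveat worth adding (my note, not from the thread): the fileglob lookup returns a single comma-separated string when several files match, which is why this only behaves when exactly one installer sits in each directory. A quick sketch to inspect the raw value:

    - name: Show what fileglob actually returns
      debug:
        msg: "{{ lookup('fileglob', 'tools/' ~ item.tool ~ '/linux/' ~ item.version ~ '/*') }}"
      with_items: "{{ tools }}"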

Ansible: variable in variable / loop or iterating through item

Here is a simple vars file we have to debug
./roles/test/vars/{{ ansible_distribution|lower }}/apt-packages.yml
    packages:
      required:
        - htop
        # - aptitude
    package:
      htop:
        allow_unauthenticated: no
        autoclean: no
        autoremove: no
        cache_valid_time: 0
        # default_release:
        force: no
        force_apt_get: no
        install_recommends: yes
        only_upgrade: no
        purge: no
        state: latest
        update_cache: yes
        upgrade: no
Here is a simple task to debug
./roles/test/tasks/main.yml
- name: "Register variable"
include_vars:
#dir: vars/ubuntu
file: "vars/{{ ansible_distribution|lower }}/apt-packages.yml"
name: apt_install
- name: "This a test"
apt:
name: "{{item}}"
cache_valid_time: "{{ apt_install.package[item].cache_valid_time }}"
state: "{{ apt_install.package[item].state }}"
update_cache: "{{ apt_install.package[item].update_cache }}"
with_items: "{{ apt_install.packages.required }}"
./roles/test-playbook.yml
- name: "playbook test"
hosts: localhost
roles:
- role: test
become: true
become_user: root
become_method: sudo
Using the following answer, stackoverflow.com/questions/29276198, we are trying to loop through the items and get the values related to each item.
The task loops through the items fine, but it has been impossible to retrieve the related variable with the [item] syntax or any other one we have tested.
We always get the same error:

    fatal: [localhost]: FAILED! => {
        "msg": "The task includes an option with an undefined variable. The error was: dict object has no element [u'htop']

But calling the variable directly works:

    - name: "echo variable test"
      debug:
        msg: "{{ apt_install.package.htop.allow_unauthenticated }}"

What is the right syntax to get the current loop value of a variable and use it inside another variable to retrieve the related value (inside the same task)?
So far it is we who are going around in circles without end!
Kind regards
This is fully working using loop / loop_control instead of with_items:

    - name: "This a test"
      apt:
        name: "{{ item }}"
        cache_valid_time: "{{ apt_install.package[item].cache_valid_time }}"
        state: "{{ apt_install.package[item].state }}"
        update_cache: "{{ apt_install.package[item].update_cache }}"
      loop: "{{ apt_install.packages.required|flatten(levels=1) }}"
      loop_control:
        index_var: index
This way we can define different settings for each package and for each distribution.
Now I can also export the settings for each package into different vars files.
Learning is hard :)
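For comparison, a minimal sketch (assuming the same vars file as above) that avoids the [item] lookups entirely by iterating the package dict with dict2items and filtering on the required list:

    - name: "This a test (dict2items variant)"
      apt:
        name: "{{ item.key }}"
        cache_valid_time: "{{ item.value.cache_valid_time }}"
        state: "{{ item.value.state }}"
        update_cache: "{{ item.value.update_cache }}"
      loop: "{{ apt_install.package | dict2items }}"
      when: item.key in apt_install.packages.required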

Adding field to dict items

Consider the following play. What I am trying to do is add a field, tmp_path, which is basically the key and revision appended together, to each element in the scripts dict.
    ---
    - hosts: localhost
      connection: local
      gather_facts: no
      vars:
        scripts:
          a.pl:
            revision: 123
          b.pl:
            revision: 456
      tasks:
        - with_dict: "{{ scripts }}"
          debug:
            msg: "{{ item.key }}_{{ item.value.revision }}"

        # - with_items: "{{ scripts }}"
        #   set_fact: {{item.value.tmp_path}}="{{item.key}}_{{item.value.revision}}"

        # - with_items: "{{ scripts }}"
        #   debug:
        #     msg: "{{ item.value.tmp_path }}"
    ...
Obviously the commented code doesn't work; any idea how I can get this working? Is it possible to alter the scripts dict directly, or should I somehow be creating a new dict to reference instead?
By the way, feel free to correct the terminology for what I am trying to do.
OK, I think I got a solution (below), at least to let me move forward with this. The disadvantages are that it removes the structure of my dict and also seems a bit redundant, having to redefine all the fields and use a new variable. If anyone can provide a better solution I will accept that instead.
    ---
    - hosts: localhost
      connection: local
      gather_facts: no
      vars:
        scripts:
          a.pl:
            revision: 123
          b.pl:
            revision: 456
      tasks:
        - with_dict: "{{ scripts }}"
          debug:
            msg: "{{ item.key }}_{{ item.value.revision }}"

        - with_dict: "{{ scripts }}"
          set_fact:
            new_scripts: "{{ (new_scripts | default([])) + [{'name': item.key, 'revision': item.value.revision, 'tmp_path': item.key ~ '_' ~ item.value.revision}] }}"

        # - debug:
        #     var: x
        # - with_dict: "{{ scripts }}"
        - with_items: "{{ new_scripts }}"
          debug:
            msg: "{{ item.tmp_path }}"
    ...
BTW credit to the following question which pointed me in the right direction:
Using Ansible set_fact to create a dictionary from register results
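For reference, a sketch of an alternative that keeps the dict structure by merging the computed field into each value with the combine filter; since the with_dict list is templated once before the loop runs, overwriting scripts inside the loop is safe:

    - with_dict: "{{ scripts }}"
      set_fact:
        scripts: "{{ scripts | combine({item.key: item.value | combine({'tmp_path': item.key ~ '_' ~ item.value.revision})}) }}"

    - debug:
        msg: "{{ scripts['a.pl'].tmp_path }}"  # prints a.pl_123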

Ansible conditional handlers for a task?

FYI, I'm running my Ansible playbook in "pull mode", not push mode. Therefore, my nodes will publish the results of their tasks via HipChat.
With that said, I have a task that installs RPMs. When the installs are successful, the nodes notify via HipChat that the task was successfully run. In the event that a task fails, I force it to notify HipChat with the --force-handlers parameter. My question: is there a way to call a particular handler depending on the outcome of the task?
Task:

    - name: Install Perl modules
      command: sudo rpm -Uvh {{ rpm_repository }}/{{ item.key }}-{{ item.value.svn_tag }}.rpm --force
      with_dict: deploy_modules_perl
      notify: announce_hipchat

Handler:

    - name: announce_hipchat
      local_action: hipchat
        from="deployment"
        token={{ hipchat_auth_token }}
        room={{ hipchat_room }}
        msg="[{{ ansible_hostname }}] Successfully installed RPMs!"
        validate_certs="no"
In this case, I use multiple handlers. See the following example:
file site.yml:
    - hosts: all
      gather_facts: false
      force_handlers: true
      vars:
        cmds:
          echo: hello
          eccho: hello  # deliberate typo so that one command fails
      tasks:
        - name: echo
          command: "{{ item.key }} {{ item.value }}"
          register: command_result
          with_dict: "{{ cmds }}"
          changed_when: true
          failed_when: false
          notify:
            - "hipchat"
      handlers:
        - name: "hipchat"
          command: "/bin/true"
          notify:
            - "hipchat_succeeded"
            - "hipchat_failed"
        - name: "hipchat_succeeded"
          debug:
            var: item
          with_items: "{{ command_result.results | selectattr('rc', 'equalto', 0) | list }}"
        - name: "hipchat_failed"
          debug:
            var: item
          with_items: "{{ command_result.results | rejectattr('rc', 'equalto', 0) | list }}"
Run it with the command:

    ansible-playbook -c local -i "localhost," site.yml

CAUTION: this uses the test filter 'equalto', which was added in Jinja2 2.8.
