Ansible: sort CSV output by router name

I've got an Ansible playbook that pulls interface descriptions from two routers and writes the results to a CSV file. As it iterates through the interfaces, it appends one line per interface, per router, to the file:
---
- name: Cisco get ints
  hosts: test
  gather_facts: false
  connection: local
  become: false
  vars:
    csv_path: /tmp
    csv_filename: int_audit.csv
    headers: Hostname,Interface,Description
  tasks:
    - name: Save CSV headers
      ansible.builtin.lineinfile:
        dest: "{{ csv_path }}/{{ csv_filename }}"
        line: "{{ headers }}"
        create: true
        state: present
      delegate_to: localhost
      run_once: true

    - name: run show inventory on remote device
      iosxr_facts:
        gather_subset: interfaces
      register: output

    - name: Write int desc to csv file
      loop: "{{ output.ansible_facts.ansible_net_interfaces | dict2items }}"
      lineinfile:
        dest: "{{ csv_path }}/{{ csv_filename }}"
        line: "{{ output.ansible_facts.ansible_net_hostname }},{{ item.key }},{{ item.value.description }}"
        create: true
        state: present
      delegate_to: localhost
so I end up with a list that has no order.
$ cat /tmp/int_audit.csv
Hostname,Interface,Description
RTR1.LAB1,BVI13,LOCAL:RTR2.LAB1:[L3]
RTR1.LAB1,Bundle-Ether1100.128,LOCAL:RTR2.LAB1:BUNDLE1100:20GE[UTIL]
RTR2.LAB1,Bundle-Ether1100.128,LOCAL:RTR1.LAB1:BUNDLE1100:20GE[UTIL]
RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]
RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N\A]
I'd like to have it sort the list by router name.
Any help is appreciated.

You could, for example, achieve your goal by simply post-processing the file on the control node.
For the test file
cat test.csv
Hostname,Interface,Description
RTR1.LAB1,BVI13,LOCAL:RTR2.LAB1:[L3]
RTR1.LAB1,Bundle-Ether1100.128,LOCAL:RTR2.LAB1:BUNDLE1100:20GE[UTIL]
RTR2.LAB1,Bundle-Ether1100.128,LOCAL:RTR1.LAB1:BUNDLE1100:20GE[UTIL]
RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]
RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N\A]
sorting by the first (Hostname) field with the sort command results in
sort -t, -k1,1 test.csv
Hostname,Interface,Description
RTR1.LAB1,Bundle-Ether1100.128,LOCAL:RTR2.LAB1:BUNDLE1100:20GE[UTIL]
RTR1.LAB1,BVI13,LOCAL:RTR2.LAB1:[L3]
RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]
RTR2.LAB1,Bundle-Ether1100.128,LOCAL:RTR1.LAB1:BUNDLE1100:20GE[UTIL]
RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N\A]
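If the header must stay on the first line regardless of locale or data, a small sketch that sorts only the body (the sample file below is hypothetical, standing in for /tmp/int_audit.csv):

```shell
# Sample unsorted CSV standing in for /tmp/int_audit.csv
printf '%s\n' 'Hostname,Interface,Description' \
  'RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N/A]' \
  'RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]' > test.csv

# Keep the header on line 1, sort only the body by the first field
head -n 1 test.csv > sorted.csv
tail -n +2 test.csv | sort -t, -k1,1 >> sorted.csv
cat sorted.csv
```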
Similar Q&A
Sort CSV file based on first column
How to sort CSV by specific column
and more
Sort CSV file by multiple columns using the sort command

Thanks, all. I've written a Perl script (which I call from Ansible) to do the sort after the data is stored in the CSV file.


Can I copy text from remote to local with Ansible playbook?

I know the Ansible fetch-module can copy a file from remote to local, but what if I only need the contents (in my case a tmp file holding the ip address) appended into a local file?
Fetch module does this:
- name: Store file into /tmp/fetched/
  ansible.builtin.fetch:
    src: /tmp/somefile
    dest: /tmp/fetched
I need it to do something like this:
- name: Store file into /tmp/fetched/
  ansible.builtin.fetch:
    src: /tmp/somefile.txt
    dest: cat src >> /tmp/fetched.txt
In a nutshell:
- name: Get remote file content
  ansible.builtin.slurp:
    src: /tmp/somefile.txt
  register: somefile

- name: Append remote file content to a local file
  vars:
    target_file: /tmp/fetched.txt
  ansible.builtin.copy:
    content: |-
      {{ lookup('file', target_file) }}
      {{ somefile.content | b64decode }}
    dest: "{{ target_file }}"
  # Fix write concurrency when running on multiple targets
  throttle: 1
  delegate_to: localhost
Notes:
- the second task isn't idempotent (it will modify the file on each run, even when appending the same content)
- this will work for small target files. If that file becomes huge and you experience long execution times or high memory consumption, you might want to switch to shell for the second task:
- name: Append remote file content to a local file
  ansible.builtin.shell:
    cmd: echo "{{ somefile.content | b64decode }}" >> /tmp/fetched
  # You might still want to avoid concurrency with multiple targets
  throttle: 1
  delegate_to: localhost
Alternatively, you could write all contents from all fetched files from all your targets in one go to avoid the concurrency problem and gain some time.
# Copy solution
- name: Append remote files contents to a local file
  vars:
    target_file: /tmp/fetched.txt
    fetched_content: "{{ ansible_play_hosts
      | map('extract', hostvars, ['somefile', 'content'])
      | map('b64decode')
      | join('\n') }}"
  ansible.builtin.copy:
    content: |-
      {{ lookup('file', target_file) }}
      {{ fetched_content }}
    dest: "{{ target_file }}"
  delegate_to: localhost
  run_once: true

# Shell solution
- name: Append remote files contents to a local file
  vars:
    fetched_content: "{{ ansible_play_hosts
      | map('extract', hostvars, ['somefile', 'content'])
      | map('b64decode')
      | join('\n') }}"
  ansible.builtin.shell:
    cmd: echo "{{ fetched_content }}" >> /tmp/fetched
  delegate_to: localhost
  run_once: true
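What the extract / b64decode / join pipeline computes can be sketched in plain Python (the hostnames and file contents below are invented for illustration; they stand in for hostvars and ansible_play_hosts):

```python
import base64

# Invented stand-ins for hostvars and ansible_play_hosts: each host's
# slurp result stores the file content base64-encoded, as slurp does
hostvars = {
    "node1": {"somefile": {"content": base64.b64encode(b"10.0.0.1").decode()}},
    "node2": {"somefile": {"content": base64.b64encode(b"10.0.0.2").decode()}},
}
ansible_play_hosts = ["node1", "node2"]

# map('extract', hostvars, ['somefile', 'content']) | map('b64decode') | join('\n')
fetched_content = "\n".join(
    base64.b64decode(hostvars[h]["somefile"]["content"]).decode()
    for h in ansible_play_hosts
)
print(fetched_content)
```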

Ansible trim string

I'm trying to write a role to install MySQL 8, and I've run into a problem with this:
- name: Extract root password from logs into {{ mysql_root_old_password }} variable
  ansible.builtin.slurp:
    src: "{{ mysql_logfile_path }}"
  register: mysql_root_old_password
  #when: "'mysql' in ansible_facts.packages"

- name: Extract root password from logs into {{ mysql_root_old_password }} variable
  set_fact:
    mysql_root_old_password: "{{ mysql_root_old_password.content | b64decode | regex_findall('generated for root@localhost: (.*)$', multiline=True) }}"
  #when: "'mysqld' in ansible_facts.packages"

- name: Get Server template
  ansible.builtin.template:
    src: "{{ item.name }}.j2"
    dest: "{{ item.path }}"
  loop:
    - { name: "my.cnf", path: "/root/.my.cnf" }
  notify:
    - Restart mysqld
In .my.cnf I get the password with quotes and brackets:
[client]
user=root
password=['th6k(gZeJSt4']
How do I trim that?
What I tried:
- name: trim password
  set_fact:
    mysql_root_old_password2: "{{ mysql_root_old_password | regex_findall('[a-zA-Z0-9,()!@#$%^&*]{12}') }}"
Thanks.
The result of regex_findall is a list because there may be multiple matches. Take the last item:
- set_fact:
    mysql_root_old_password: "{{ mysql_root_old_password.content |
      b64decode |
      regex_findall('generated for root@localhost: (.*)$', multiline=True) |
      last }}"
From your description
on the .my.cnf I get password with quotes and brackets ... How to trim that
I understand that you'd like to read an INI file like my.cnf.ini
[client]
user=root
password=['A1234567890B']
where the value of the key password looks like a YAML list with one element and the structure doesn't change, but you are interested only in the value without the leading and trailing square brackets and single quotes.
To do so there are several possibilities.
Via Ansible Lookup plugins
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Extract root password from INI file
      debug:
        msg: "{{ lookup('ini', 'password section=client file=my.cnf.ini') }}"
      register: result

    - name: Show result with type
      debug:
        msg:
          - "{{ result.msg }}"
          - "result.msg is of type {{ result.msg | type_debug }}"
          - "Show password only {{ result.msg[0] }}"  # the first list element
This approach works on the control node.
Like all templating, lookups execute and are evaluated on the Ansible control machine.
Further Q&A
How to read a line from a file into an Ansible variable
What is the difference between .ini and .conf?
Further Documentation
ini lookup – read data from an INI file
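For comparison, Python's standard configparser shows what value the ini lookup would hand back, and how the brackets and quotes could be trimmed afterwards (the INI content below is the question's sample, not a real my.cnf):

```python
import configparser

# Hypothetical my.cnf.ini content from the question
ini_text = "[client]\nuser=root\npassword=['A1234567890B']\n"

parser = configparser.ConfigParser()
parser.read_string(ini_text)

raw = parser.get("client", "password")  # "['A1234567890B']"
# Trim the leading/trailing brackets and quotes, as the question asks
password = raw.strip("[]'")
print(password)  # A1234567890B
```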
Via Ansible shell module, sed and cut.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Extract root password from INI file
      shell:
        cmd: echo $(sed -n 's/^password=//p' my.cnf.ini | cut -d "'" -f 2)
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
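The sed-and-cut pipeline can be tested on its own in a shell, independent of Ansible (the sample file below is hypothetical):

```shell
# Hypothetical my.cnf.ini matching the question
printf '%s\n' "[client]" "user=root" "password=['A1234567890B']" > my.cnf.ini

# Drop the 'password=' prefix, then take the text between the single quotes
sed -n 's/^password=//p' my.cnf.ini | cut -d "'" -f 2
# prints: A1234567890B
```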
Please note, regarding "I get password with quotes and brackets ... ['<pass>'] ... How to trim that?", that from the perspective of the SQL service reading the .cnf file, [' and '] may actually be part of the password!
- name: Server template
  template:
    src: "my.cnf.ini"
    dest: "/root/.my.cnf"
If that is correct, they will be necessary for the proper function of the service.

Ansible how to sort output of command from multiple nodes

I have an Ansible playbook that runs some command on all my nodes to generate a list of
available images. Then on my ansible controller, I want to generate a combined, sorted
list with all duplicates removed.
I have a playbook:
- hosts: kube_cluster
  tasks:
    - command: crictl images
      become: yes
      register: cri_images

    - name: save results
      delegate_to: localhost
      run_once: yes
      become: no
      copy:
        dest: cri_images.txt
        content: |
          {% for h in ansible_play_hosts %}
          {{ hostvars[h].cri_images.stdout }}
          {% endfor %}
That does generate the file, but I can't get the combined output sorted.
I would get, for each of the nodes, output like:
docker.io/bitnami/contour 1.17.1-debian-10-r0 5be5c048ac5e2 132MB
docker.io/bitnami/envoy 1.17.3-debian-10-r62 7ff8d931d11c7 150MB
docker.io/calico/cni v3.19.1 5749e8b276f9b 146MB
docker.io/calico/node v3.19.1 c4d75af7e098e 171MB
available in 'stdout_lines'. Some lines are duplicated across some of the nodes.
I've failed to adapt the Jinja code to sort and remove duplicates. It probably needs some form of nested loop, or a means to flatten hostvars[h].cri_images.stdout.
But I could not figure out how to do that. Most attempts ended in syntax/template errors.
As an extra, I'd also like to:
remove the header line of the output
have the option to select only few of the columns of the combined output.
In a shell script this could be achieved easily by a
cat cri_images.txt | grep -v ^IMAGES | awk '{print $1 ":" $2}' | sort | uniq
but requirement is that it would have to run in Ansible.
Anyone have a hint?
Thanks.
Q: "Some command on all my nodes to generate a list of available images ... generate a combined, sorted list with all duplicates removed."
A: For testing, let's split the hostnames, instead of generating the lists of available images, e.g.
- hosts: test_11,test_12,test_13
  tasks:
    - set_fact:
        cri_images: "{{ {} | combine({'stdout_lines':
          inventory_hostname.split('_')}) }}"

    - debug:
        var: cri_images.stdout_lines
gives
ok: [test_12] =>
cri_images.stdout_lines:
- test
- '12'
ok: [test_11] =>
cri_images.stdout_lines:
- test
- '11'
ok: [test_13] =>
cri_images.stdout_lines:
- test
- '13'
Then, generate a combined, sorted list with all duplicates removed:
- copy:
    dest: cri_images.txt
    content: "{{ _content | join('\n') }}"
  vars:
    _content: "{{ ansible_play_hosts |
      map('extract', hostvars, ['cri_images', 'stdout_lines']) |
      flatten | unique | sort }}"
  delegate_to: localhost
  run_once: true
gives
shell> cat cri_images.txt
11
12
13
test
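For reference, the flatten | unique | sort idea, together with the extra wishes (drop the header line, keep only selected columns), can be sketched in plain Python; the per-host lines below are invented sample data standing in for hostvars[h].cri_images.stdout_lines:

```python
# Per-host 'crictl images' output lines (invented sample data)
host_lines = {
    "node1": [
        "IMAGE TAG IMAGE ID SIZE",
        "docker.io/calico/cni v3.19.1 5749e8b276f9b 146MB",
        "docker.io/calico/node v3.19.1 c4d75af7e098e 171MB",
    ],
    "node2": [
        "IMAGE TAG IMAGE ID SIZE",
        "docker.io/calico/cni v3.19.1 5749e8b276f9b 146MB",
    ],
}

# flatten | unique | sort, dropping header lines and keeping only the
# first two columns joined as image:tag (like grep/awk/sort/uniq)
combined = sorted({
    "{}:{}".format(*line.split()[:2])
    for lines in host_lines.values()
    for line in lines
    if not line.startswith("IMAGE")
})
print("\n".join(combined))
```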

Ansible task to strip out data from a file

A file has the following contents
com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a
com.dkr.container.account=12HJB
I need to fetch a43019cc-d4a4-4acb-83dd-defd76443c6a and write it to a variable using an Ansible task. The value then needs to be passed to other tasks in the same Ansible file.
Can someone show me the task required to achieve this?
If your file is on the controller, you can use the file lookup to get its content.
If the file is on the node, you will have to use something like the slurp module.
Then, when you have the file content, you can use the regex_search filter to extract your required text.
With the file on the controller:
- set_fact:
    com_dkr_container_id: >-
      {{
        lookup('file', '/path/to/file')
        | regex_search('com\.dkr\.container\.id=(.*)', '\1')
        | first
      }}
With the file on the node(s):
- slurp:
    src: /path/to/file
  register: file_content

- set_fact:
    com_dkr_container_id: >-
      {{
        file_content.content
        | b64decode
        | regex_search('com\.dkr\.container\.id=(.*)', '\1')
        | first
      }}
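The same capture-group extraction can be checked in plain Python, which uses identical regular-expression semantics:

```python
import re

# File content as read by lookup('file', ...) or slurp | b64decode
content = (
    "com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a\n"
    "com.dkr.container.account=12HJB\n"
)

# Same idea as regex_search('com\.dkr\.container\.id=(.*)', '\1'):
# the capture group keeps only the text after the '=' sign
match = re.search(r"com\.dkr\.container\.id=(.*)", content)
container_id = match.group(1)
print(container_id)  # a43019cc-d4a4-4acb-83dd-defd76443c6a
```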
This is the job for the ini lookup plugin. See
shell> ansible-doc -t lookup ini
For example, given the file
shell> cat container.properties
com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a
com.dkr.container.account=12HJB
The playbook
- hosts: localhost
  tasks:
    - set_fact:
        id: "{{ lookup('ini', 'com.dkr.container.id
                 type=properties
                 file=container.properties') }}"

    - debug:
        var: id
gives
id: a43019cc-d4a4-4acb-83dd-defd76443c6a
The lookup plugins work on the controller only. If the file is on the remote host, fetch it first. For example, given the file
shell> ssh admin@test_11 cat container.properties
com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a
com.dkr.container.account=12HJB
The playbook
- hosts: test_11
  tasks:
    - fetch:
        src: container.properties
        dest: /tmp/fetched

    - set_fact:
        id: "{{ lookup('ini', 'com.dkr.container.id
                 type=properties
                 file=/tmp/fetched/{{ inventory_hostname }}/container.properties') }}"

    - debug:
        var: id
gives the same result
id: a43019cc-d4a4-4acb-83dd-defd76443c6a
The playbook above is idempotent. The file will be stored at the controller
shell> tree /tmp/fetched/
/tmp/fetched/
└── test_11
└── container.properties

Ansible access same variables from multiple Json files

I have multiple .json files on the local host where I run my playbook:
json-file-path/{{ testName }}.json
{{ testName }}.json are: testA.json, testB.json, testC.json ... etc.
All .json files have same keys with different values like this:
json-file-path/testA.json:
{
  "a_key": "a_value1",
  "b_key": "b_value1"
}
json-file-path/testB.json:
{
  "a_key": "a_value2",
  "b_key": "b_value2"
}
json-file-path/testC.json:
{
  "a_key": "a_value3",
  "b_key": "b_value3"
}
.....
I need to access the key-value variables from all .json files and if the values meet some condition, I will perform some task in target host. For example, I have:
a_value1=3
a_value2=4
a_value3=1
I go through my .json files one by one; if a_key's value > 3, I copy that .json file to the target host, otherwise I skip the task. In this case, I will only copy testB.json to the target host.
How would I achieve this? I was thinking of restructuring my .json files to use {{ testName }} as a dynamic dict key, like this:
{
  "testA": {
    "a_key": "a_value1",
    "b_key": "b_value1"
  }
}
so I could access my variable as {{ testName }}.a_key. So far I haven't been able to achieve this.
I have tried the following in my playbook:
---
- hosts: localhost
  tasks:
    - name: construct json files
      vars:
        my_vars:
          a_key: "{{ a_value }}"
          b_key: "{{ b_value }}"
      with_dict: "{{ testName }}"
      copy:
        content: "{{ my_vars | to_nice_json }}"
        dest: /json-file-path/{{ testName }}.json
My updated playbooks are:
/mypath/tmp/include.yaml:
---
- hosts: remote_hostName
  tasks:
    - name: load json files
      set_fact:
        json_data: "{{ lookup('file', item) | from_json }}"

    - name: copy json file if condition meets
      copy:
        src: "{{ item }}"
        dest: "{{ remote_dest_dir }}/{{ item | basename }}"
      delegate_to: "{{ remote_hostName }}"
      when: json_data.a_key|int > 5
/mypath/test.yml:
---
- hosts: localhost
  vars:
    local_src_dir: /mypath/tmp
    remote_host: remote_hostName
    remote_dest_dir: /remote_host_path/tmp
  tasks:
    - name: looping
      include: include.yaml
      with_fileglob:
        - "{{ local_src_dir }}/*json"
All json files are on localhost under /mypath/tmp/.
The latest version of the playbooks, which are working now:
/mypath/tmp/include.yaml:
---
- name: loading json files
  include_vars:
    file: "{{ item }}"
    name: json_data

- name: copy json file to remote if condition meets
  copy:
    src: "{{ item }}"
    dest: "/remote_host_path/tmp/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: json_data.a_key > 5
/mypath/test.yml:
---
- hosts: localhost
  vars:
    local_src_dir: /mypath/tmp
    remote_host: remote_hostName
    remote_dest_dir: /remote_host_path/tmp
  tasks:
    - name: looping json files
      include: include.yaml
      with_fileglob:
        - "{{ local_src_dir }}/*json"
I am hoping that I have understood your requirements correctly, and that this helps move you forward.
Fundamentally, you can load each of the JSON files so you can query the values as native Ansible variables. Therefore you can loop through all the files, read each one, compare the value you are interested in and then conditionally copy to your remote host via a delegated task. Therefore, give this a try:
Create an include file include.yaml:
---
# 'item' contains a path to a local JSON file on each pass of the loop
- name: Load the json file
  set_fact:
    json_data: "{{ lookup('file', item) | from_json }}"

- name: Delegate a copy task to the remote host conditionally
  copy:
    src: "{{ item }}"
    dest: "{{ remote_dest_dir }}/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: json_data.a_key > value_threshold
then in your playbook:
---
- hosts: localhost
  connection: local
  # Set some example vars, though these could be placed in a variety of places
  vars:
    local_src_dir: /some/local/path
    remote_host: <some_inventory_hostname>
    remote_dest_dir: /some/remote/path
    value_threshold: 3
  tasks:
    - name: Loop through all *json files, passing matches to include.yaml
      include: include.yaml
      loop: "{{ lookup('fileglob', local_src_dir + '/*json').split(',') }}"
Note: As you are running an old version of Ansible, you may need older alternate syntax for all of this to work:
In your include file:
- name: Load the json file
  include_vars: "{{ item }}"

- name: Delegate a copy task to the remote host conditionally
  copy:
    src: "{{ item }}"
    dest: "{{ remote_dest_dir }}/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: a_key > value_threshold
and in your playbook:
- name: Loop through all *json files, passing matches to include.yaml
  include: include.yaml
  with_fileglob:
    - "{{ local_src_dir }}/*json"
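The overall select-by-threshold logic can be sketched in plain Python (file names and values mirror the question's example; the temporary directory is just for illustration):

```python
import glob
import json
import os
import tempfile

# Hypothetical sample files standing in for /mypath/tmp/*.json
src_dir = tempfile.mkdtemp()
for name, a_value in [("testA", 3), ("testB", 4), ("testC", 1)]:
    with open(os.path.join(src_dir, name + ".json"), "w") as f:
        json.dump({"a_key": a_value, "b_key": "some_value"}, f)

# Select a file only when its a_key value exceeds the threshold
value_threshold = 3
selected = sorted(
    os.path.basename(path)
    for path in glob.glob(os.path.join(src_dir, "*.json"))
    if json.load(open(path))["a_key"] > value_threshold
)
print(selected)  # ['testB.json']
```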
