I have the following prod.yaml file:
configMap:
  data:
    env:
      APP_1: "{{ .Data.data.app_1 }}"
      APP_2: "{{ .Data.data.app_2 }}"
      APP_3: "{{ .Data.data.app_3 }}"
      APP_4: "{{ .Data.data.app_4 }}"
and this updated.yaml file:
APP_1:
APP_2:
APP_3:
APP_4:
APP_5:
LOG_DIR:
The expected result is:
configMap:
  data:
    env:
      APP_1: "{{ .Data.data.app_1 }}"
      APP_2: "{{ .Data.data.app_2 }}"
      APP_3: "{{ .Data.data.app_3 }}"
      APP_4: "{{ .Data.data.app_4 }}"
      APP_5:
      LOG_DIR:
I am using awk to format the new data fields:
cat .env | awk -F":" '{print $1": \"{{ .Data.data."tolower($1)" }}\""}' > updated.yaml
And yq to merge the new fields:
yq '.env[] *=n [load("updated.yaml")]' prod.yaml > a.yaml
But I cannot update the prod.yaml file...
I am reading the yq documentation.
With mikefarah/yq, which you are using, it's pretty close to what you have. Remember that .env is a !!map type and not a !!seq, so do the recursive merge as the map type, i.e.
yq '.configMap.data.env *=n load("updated.yaml")' prod.yaml
The above does not modify the file in place. If you are satisfied with the contents on stdout, you can use the -i flag to substitute the file in place.
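For reference, a minimal sketch of the in-place variant (the same expression, just with the -i flag added):
yq -i '.configMap.data.env *=n load("updated.yaml")' prod.yaml
The n flag on the merge only adds fields that are new, which is why APP_1 through APP_4 keep their templated values while APP_5 and LOG_DIR are appended.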
I've got an Ansible playbook that pulls interface descriptions from two routers and writes the results to a CSV file. When it iterates through the interfaces, it writes one interface per router to the file:
---
- name: Cisco get ints
  hosts: test
  gather_facts: false
  connection: local
  become: false
  vars:
    csv_path: /tmp
    csv_filename: int_audit.csv
    headers: Hostname,Interface,Description
  tasks:
    - name: Save CSV headers
      ansible.builtin.lineinfile:
        dest: "{{ csv_path }}/{{ csv_filename }}"
        line: "{{ headers }}"
        create: true
        state: present
      delegate_to: localhost
      run_once: true

    - name: run show inventory on remote device
      iosxr_facts:
        gather_subset: interfaces
      register: output

    - name: Write int desc to csv file
      loop: "{{ output.ansible_facts.ansible_net_interfaces | dict2items }}"
      lineinfile:
        dest: "{{ csv_path }}/{{ csv_filename }}"
        line: "{{ output.ansible_facts.ansible_net_hostname }},{{ item.key }},{{ item.value.description }}"
        create: true
        state: present
      delegate_to: localhost
so I end up with a list that has no order.
$ cat /tmp/int_audit.csv
Hostname,Interface,Description
RTR1.LAB1,BVI13,LOCAL:RTR2.LAB1:[L3]
RTR1.LAB1,Bundle-Ether1100.128,LOCAL:RTR2.LAB1:BUNDLE1100:20GE[UTIL]
RTR2.LAB1,Bundle-Ether1100.128,LOCAL:RTR1.LAB1:BUNDLE1100:20GE[UTIL]
RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]
RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N\A]
I'd like to have it sort the list by router name.
Any help is appreciated.
You could, for example, achieve your goal by simply post-processing the file on the control node.
For the test file
cat test.csv
Hostname,Interface,Description
RTR1.LAB1,BVI13,LOCAL:RTR2.LAB1:[L3]
RTR1.LAB1,Bundle-Ether1100.128,LOCAL:RTR2.LAB1:BUNDLE1100:20GE[UTIL]
RTR2.LAB1,Bundle-Ether1100.128,LOCAL:RTR1.LAB1:BUNDLE1100:20GE[UTIL]
RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]
RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N\A]
the sort command produces:
sort -k1 -n -t, test.csv
Hostname,Interface,Description
RTR1.LAB1,Bundle-Ether1100.128,LOCAL:RTR2.LAB1:BUNDLE1100:20GE[UTIL]
RTR1.LAB1,BVI13,LOCAL:RTR2.LAB1:[L3]
RTR1.LAB1,Loopback0,LOOP:LOOP0-RTR1.LAB1:[N/A]
RTR2.LAB1,Bundle-Ether1100.128,LOCAL:RTR1.LAB1:BUNDLE1100:20GE[UTIL]
RTR2.LAB1,Loopback0,LOOP:LOOP0-RTR2.LAB1:[N\A]
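If you would rather keep the post-processing inside the playbook, here is a minimal sketch (assuming the csv_path and csv_filename vars from the playbook above; sort -t, -k1,1 sorts lexically on the first field, and the header is split off first so it always stays on top):

- name: Sort the CSV by hostname on the control node
  ansible.builtin.shell: |
    head -n 1 {{ csv_path }}/{{ csv_filename }} > {{ csv_path }}/{{ csv_filename }}.tmp
    tail -n +2 {{ csv_path }}/{{ csv_filename }} | sort -t, -k1,1 >> {{ csv_path }}/{{ csv_filename }}.tmp
    mv {{ csv_path }}/{{ csv_filename }}.tmp {{ csv_path }}/{{ csv_filename }}
  delegate_to: localhost
  run_once: true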
Similar Q&A
Sort CSV file based on first column
How to sort CSV by specific column
and more
Sort CSV file by multiple columns using the sort command
Thanks all. I've written a Perl script (which I call from Ansible) to do the sort after the data is stored in the CSV file.
I have this file, and I know the value of LOC, for example /oracle/19.0.0.
I would like to get the value of HOME NAME=; the corresponding value would be OraDB19Home1.
I looked at lookup but was unable to get it fully working. I'd appreciate any help.
<?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?>
<!-- Copyright (c) 1999, 2022, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
  <VERSION_INFO>
    <SAVED_WITH>13.9.4.0.0</SAVED_WITH>
    <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
  </VERSION_INFO>
  <HOME_LIST>
    <HOME NAME="OraHome1" LOC="/oracle/agent/agent13.4" TYPE="O" IDX="3"/>
    <HOME NAME="OraDB19Home1" LOC="/oracle/19.0.0" TYPE="O" IDX="2"/>
  </HOME_LIST>
</INVENTORY>
Given the XML
shell> cat inventory.xml
<?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?>
<!-- Copyright (c) 1999, 2022, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
  <VERSION_INFO>
    <SAVED_WITH>13.9.4.0.0</SAVED_WITH>
    <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
  </VERSION_INFO>
  <HOME_LIST>
    <HOME NAME="OraHome1" LOC="/oracle/agent/agent13.4" TYPE="O" IDX="3"/>
    <HOME NAME="OraDB19Home1" LOC="/oracle/19.0.0" TYPE="O" IDX="2"/>
  </HOME_LIST>
</INVENTORY>
Read the file and convert XML to YAML
inv_xml: "{{ lookup('file', 'inventory.xml') }}"
inv_yml: "{{ inv_xml|ansible.utils.from_xml }}"
gives
inv_yml:
  INVENTORY:
    HOME_LIST:
      HOME:
        - '#IDX': '3'
          '#LOC': /oracle/agent/agent13.4
          '#NAME': OraHome1
          '#TYPE': O
        - '#IDX': '2'
          '#LOC': /oracle/19.0.0
          '#NAME': OraDB19Home1
          '#TYPE': O
    VERSION_INFO:
      MINIMUM_VER: 2.1.0.6.0
      SAVED_WITH: 13.9.4.0.0
Create a dictionary of LOC and NAME
loc_name: "{{ inv_yml.INVENTORY.HOME_LIST.HOME|
              items2dict(key_name='#LOC',
                         value_name='#NAME') }}"
gives
loc_name:
  /oracle/19.0.0: OraDB19Home1
  /oracle/agent/agent13.4: OraHome1
Then, searching is trivial
loc: '/oracle/19.0.0'
name_of_loc: "{{ loc_name[loc] }}"
gives
name_of_loc: OraDB19Home1
or, in a loop,
- debug:
    msg: "The name of LOC {{ item }} is {{ loc_name[item] }}"
  loop:
    - '/oracle/19.0.0'
    - '/oracle/agent/agent13.4'
gives (abridged)
msg: The name of LOC /oracle/19.0.0 is OraDB19Home1
msg: The name of LOC /oracle/agent/agent13.4 is OraHome1
Example of a complete playbook for testing
shell> cat pb.yml
- hosts: localhost
  vars:
    inv_xml: "{{ lookup('file', 'inventory.xml') }}"
    inv_yml: "{{ inv_xml|ansible.utils.from_xml }}"
    loc_name: "{{ inv_yml.INVENTORY.HOME_LIST.HOME|
                  items2dict(key_name='#LOC',
                             value_name='#NAME') }}"
    loc: '/oracle/19.0.0'
    name_of_loc: "{{ loc_name[loc] }}"
  tasks:
    - debug:
        var: inv_xml
    - debug:
        var: inv_yml
    - debug:
        var: loc_name
    - debug:
        var: name_of_loc
    - debug:
        msg: "The name of LOC {{ item }} is {{ loc_name[item] }}"
      loop:
        - '/oracle/19.0.0'
        - '/oracle/agent/agent13.4'
Example of the project
shell> tree .
.
├── ansible.cfg
├── hosts
├── inventory.xml
└── pb.yml
0 directories, 4 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
collections_path = $HOME/.local/lib/python3.9/site-packages/
inventory = $PWD/hosts
roles_path = $PWD/roles
remote_tmp = ~/.ansible/tmp
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
localhost
Q: "Give an alternative to ansible.utils"
A: Install jc and use it in the pipe. The declaration below expands to the same YAML as before
inv_yml: "{{ lookup('pipe', 'cat inventory.xml | jc --xml') }}"
Q: "Using possible regex?"
A: Select the line first
inv_xml: "{{ lookup('file', 'inventory.xml') }}"
loc: '/oracle/19.0.0'
home_loc_regex: '^\s*<HOME .*? LOC="{{ loc }}" .*$'
home: "{{ inv_xml.splitlines()|
          select('regex', home_loc_regex)|
          first|
          trim }}"
gives
home: <HOME NAME="OraDB19Home1" LOC="/oracle/19.0.0" TYPE="O" IDX="2"/>
Parse the attributes
home_dict: "{{ dict(home[6:-2]|
               replace('\"', '')|
               split(' ')|
               map('split', '=')) }}"
gives
home_dict:
  IDX: '2'
  LOC: /oracle/19.0.0
  NAME: OraDB19Home1
  TYPE: O
Q: "No filter named 'split'"
A: The split filter is available since ansible-core 2.11. In lower versions, only the Python '.split' method is available. In that case, use Jinja to create the YAML structure. The declarations below give the same dictionary home_dict as before
home_dict_str: |
  {% for i in home[6:-2].split(' ') %}
  {% set arr = i.split('=') %}
  {{ arr.0 }}: {{ arr.1 }}
  {% endfor %}
home_dict: "{{ home_dict_str|from_yaml }}"
This is almost trivial to do with awk:
Formatted as a script file:
/<HOME NAME=.*TYPE=.*IDX=/ {
    if ($3 == "LOC=\"/oracle/19.0.0\"") {
        split($2, a, /"/);
        print a[2];
        exit;
    }
}
Command line:
cat your_input_file | awk '/<HOME NAME=.*TYPE=.*IDX=/{ if ($3 == "LOC=\"/oracle/19.0.0\"") { split($2, a, /"/); print a[2]; exit } }'
You can probably convert this relatively easily to something similar in python.
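For instance, a rough Python equivalent of the same idea; this is only a sketch, using the standard library's xml.etree.ElementTree to inspect the attributes instead of matching fields the way awk does:

import xml.etree.ElementTree as ET

# Parse the inventory and print the NAME of the HOME whose LOC matches.
tree = ET.parse("inventory.xml")  # path assumed from the examples above
for home in tree.getroot().iter("HOME"):
    if home.get("LOC") == "/oracle/19.0.0":
        print(home.get("NAME"))  # -> OraDB19Home1
        break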
awk -v loc='LOC="/oracle/19.0.0"' '
    index($0, loc) {
        print gensub(/^.*NAME="([^"]*)".*$/, "\\1", 1)
    }
' inputfile
or
awk -F'=|"' -v loc='LOC="/oracle/19.0.0"' 'index($0,loc){print $3}' inputfile
awk -F'=|"' -v loc="/oracle/19.0.0" '/ LOC="[^"]*" / && $6 == loc {print $3}' inputfile
Output
OraDB19Home1
For me, the use case looks like a similar Q&A: Ansible: How to pull a specific string out of the contents of a file?
It is assumed that the unfavorable data structure within your example configuration file stays as provided and that you are looking for a grep approach in Ansible.
- name: Gather home directory
  shell:
    cmd: "grep '{{ VERSION }}' inventory.xml | cut -d '=' -f 2 | cut -d ' ' -f 1 | tr -d '\"'"
  register: home_dir
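With VERSION set to /oracle/19.0.0, the grep picks the matching HOME line, the first cut takes everything after the first '=', the second cut keeps the first word, and tr strips the quotes, so home_dir.stdout should contain OraDB19Home1.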
How to proceed further?
If such lookups are necessary frequently or on a greater scale, one might benefit from implementing a specific solution.
If installing the ansible.utils collection and the Python library xmltodict is possible, or they are already available, you should definitely go with the solution recommended by Vladimir Botka: the from_xml filter (convert a given XML string to a native Python dictionary), converting the XML to YAML before further processing.
Otherwise, you could give one of these custom module examples a try:
Bash: File search ... with Ansible
Bash: How to check if a file is of type human-readable in Ansible?
Python: How to search for a string in a Remote File using Ansible?
I have data in 3 variables. I am able to dump it into an Excel file, but it ends up in a single column instead of multiple columns. Any idea how I can get it into multiple columns?
- name: Add mappings to /etc/hosts
  lineinfile:
    insertafter: EOF
    dest: ~/test.xlxs
    line: "\t {{ item.0 }} \t\t {{ item.1 }} \t\t {{ item.2 }}"
  with_together:
    - "{{ Test1 }}"
    - "{{ Test2 }}"
    - "{{ Test3 }}"
Output of the above:
Column1
a b c
Expected output:
Column1  Column2  Column3
a        b        c
Your task will produce a tab-separated (TSV) file, which can be imported into Excel by setting the delimiter to Tab under Data >> Text to Columns.
Alternatively, you can create a CSV file, which Excel should have no trouble importing into its cells.
Just change the \t to , in your task:
- name: Add mappings to /etc/hosts
  lineinfile:
    insertafter: EOF
    dest: ~/test.csv
    line: "{{ item.0 }},{{ item.1 }},{{ item.2 }}"
  with_together:
    - "{{ Test1 }}"
    - "{{ Test2 }}"
    - "{{ Test3 }}"
I have this file in my remote host:
$ cat /etc/default/locale
LANG=pt_PT.UTF-8
LANGUAGE=en_US.UTF-8
How can I read that and import those key=value pairs into variables to use in the following tasks?
Fetch the file and put it, for example, into the inventory_dir
- set_fact:
    my_fetch_file: "{{ inventory_dir ~
                       '/' ~
                       inventory_hostname ~
                       '-locale' }}"

- fetch:
    flat: true
    src: /etc/default/locale
    dest: "{{ my_fetch_file }}"
Use the ini lookup plugin to read the values
- set_fact:
    my_LANG: "{{ lookup('ini',
                        'LANG type=properties file=' ~
                        my_fetch_file) }}"
It is possible to read a list of variables into a dictionary. For example
- set_fact:
    my_vars: "{{ my_vars|default({})|
                 combine({item:
                          lookup('ini',
                                 item ~
                                 ' type=properties file=' ~
                                 my_fetch_file)}) }}"
  loop: [LANG, LANGUAGE]
Then the debug below should print the values
- debug:
    var: my_vars[item]
  loop: [LANG, LANGUAGE]
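For the example file above, this should print (abridged):
my_vars[item]: pt_PT.UTF-8
my_vars[item]: en_US.UTF-8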
Q: "Can you please clarify all those ~?"
A: Quoting from the Jinja documentation, the Math section:
+ Adds two objects together. Usually the objects are numbers, but if both are strings or lists, you can concatenate them this way. This, however, is not the preferred way to concatenate strings! For string concatenation, have a look-see at the ~ operator. {{ 1 + 1 }} is 2.
and quoting from Other Operators:
~ Converts all operands into strings and concatenates them.
{{ "Hello " ~ name ~ "!" }} would return (assuming name is set to 'John') Hello John!.
You can use the setup module and facts.d for gathering custom facts from the remote host:
- name: "Create custom fact directory"
file:
path: "/etc/ansible/facts.d"
state: "directory"
- name: "coping the custom fact file"
copy:
remote_src: yes
src: /etc/default/locale
dest: /etc/ansible/facts.d/locale.fact
mode: '755'
- name: "gathering custom facts"
setup:
filter: ansible_local
- debug:
var: ansible_local.locale
- name: "remove the custom fact directory"
file:
path: "/etc/ansible/facts.d"
state: absent
first line: /u01/app/oracle/oradata/TEST/
second line: /u02/
How can I read both lines into the same variable, and, using that variable, find the present working directory through shell commands in Ansible?
You can use the command module to read a file from disk:
- name: Read a file into a variable
  command: cat /path/to/your/file
  register: my_variable
Then do something like the following to loop over the lines in the file:
- debug:
    msg: "line: {{ item }}"
  loop: "{{ my_variable.stdout_lines }}"
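For the two-line file above, this prints line: /u01/app/oracle/oradata/TEST/ and line: /u02/.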
The task below creates the list of the lines from a file
- set_fact:
    lines_list: "{{ lines_list|default([]) + [item] }}"
  with_lines: cat /path/to/file
It's possible to create both a list
"lines_list": [
"/u01/app/oracle/oradata/TEST/",
"/u02/"
]
and a dictionary
"lines_dict": {
"0": "/u01/app/oracle/oradata/TEST/",
"1": "/u02/"
}
with the combine filter
- set_fact:
    lines_dict: "{{ lines_dict|default({})|combine({idx: item}) }}"
  with_lines: cat /path/to/file
  loop_control:
    index_var: idx
"Present working directory through shell commands in ansible" can be printed from the registered variable. For example
- command: echo $PWD
  register: result

- debug:
    var: result.stdout
(not tested)
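If the goal is to run pwd inside each directory read from the file, a minimal sketch (assuming the lines_list variable built above holds the two paths):

- command: pwd
  args:
    chdir: "{{ item }}"
  loop: "{{ lines_list }}"
  register: pwd_results

- debug:
    msg: "pwd in {{ item.item }} is {{ item.stdout }}"
  loop: "{{ pwd_results.results }}"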