How to write variables/hard-coded values in nested JSON in Ansible? - ansible

I'm trying to create a JSON file with hard-coded values as output, in nested JSON. But the second task is overwriting the first task's value. Is there a better way to do this?
I have tried the to_nice_json filter to copy the variable to a JSON file, but I am not able to keep multiple variable values in imported_var to copy to the JSON file.
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: load var from file
      include_vars:
        file: /tmp/var.json
        name: imported_var
    - name: Checking mysqld status
      shell: service mysqld status
      register: mysqld_stat
      ignore_errors: true
    - name: Checking httpd status
      shell: service httpd status
      register: httpd_stat
      ignore_errors: true
    - name: append mysqld status to output json
      set_fact:
        imported_var: "{{ imported_var | combine({'status_checks': [{'mysqld_status': (mysqld_stat.rc == 0) | ternary('good', 'bad')}]}) }}"
    # - name: write var to file
    #   copy:
    #     content: "{{ imported_var | to_nice_json }}"
    #     dest: /tmp/final.json
    - name: append httpd status to output json
      set_fact:
        imported_var: "{{ imported_var | combine({'status_checks': [{'httpd_status': (httpd_stat.rc == 0) | ternary('good', 'bad')}]}) }}"
    # - debug:
    #     var: imported_var
    - name: write var to file
      copy:
        content: "{{ imported_var | to_nice_json }}"
        dest: /tmp/final.json
Expected result:
{
    "status_checks": [
        {
            "mysqld_status": "good",
            "httpd_status": "good"
        }
    ]
}
Actual result:
{
    "status_checks": [
        {
            "httpd_status": "good"
        }
    ]
}

You're trying to perform the sort of data manipulation that Ansible really isn't all that good at. Any time you attempt to modify an existing variable -- especially if you're trying to set a nested value -- you're making life complicated. Having said that, it is possible to do what you want. For example:
---
- hosts: localhost
  gather_facts: false
  vars:
    imported_var: {}
  tasks:
    - name: Checking sshd status
      command: systemctl is-active sshd
      register: sshd_stat
      ignore_errors: true
    - name: Checking httpd status
      command: systemctl is-active httpd
      register: httpd_stat
      ignore_errors: true
    - set_fact:
        imported_var: "{{ imported_var | combine({'status_checks': []}) }}"
    - set_fact:
        imported_var: >-
          {{ imported_var | combine({'status_checks':
             imported_var.status_checks + [{'sshd_status': (sshd_stat.rc == 0) | ternary('good', 'bad')}]}) }}
    - set_fact:
        imported_var: >-
          {{ imported_var | combine({'status_checks':
             imported_var.status_checks + [{'httpd_status': (httpd_stat.rc == 0) | ternary('good', 'bad')}]}) }}
    - debug:
        var: imported_var
On my system (which is running sshd but is not running httpd), this will output:
TASK [debug] **********************************************************************************
ok: [localhost] => {
    "imported_var": {
        "status_checks": [
            {
                "sshd_status": "good"
            },
            {
                "httpd_status": "bad"
            }
        ]
    }
}
You could dramatically simplify the playbook by restructuring your data. Make status_checks a top level variable, and instead of having it be a list, have it be a dictionary that maps a service name to the corresponding status. Combine this with some loops and you end up with something that is dramatically simpler:
---
- hosts: localhost
  gather_facts: false
  tasks:
    # We can use a loop here instead of writing a separate task
    # for each service.
    - name: Checking service status
      command: systemctl is-active {{ item }}
      register: services
      ignore_errors: true
      loop:
        - sshd
        - httpd
    # Using a loop in the previous task means we can use a loop
    # when creating the status_checks variable, which again removes
    # a bunch of duplicate code.
    - name: set status_checks variable
      set_fact:
        status_checks: "{{ status_checks | default({}) | combine({item.item: (item.rc == 0) | ternary('good', 'bad')}) }}"
      loop: "{{ services.results }}"
    - debug:
        var: status_checks
The above will output:
TASK [debug] **********************************************************************************
ok: [localhost] => {
    "status_checks": {
        "httpd": "bad",
        "sshd": "good"
    }
}
If you really want to add this information to your imported_var, you can do that in a single task:
- set_fact:
    imported_var: "{{ imported_var | combine({'status_checks': status_checks}) }}"
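And if you then write the merged variable out as in the question (a sketch, reusing the same copy task and /tmp/final.json destination):

- name: write var to file
  copy:
    content: "{{ imported_var | to_nice_json }}"
    dest: /tmp/final.json

the file would contain a dictionary rather than a list of single-key dictionaries, something like:

{
    "status_checks": {
        "httpd": "bad",
        "sshd": "good"
    }
}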

Related

Ansible - Prevent playbook executing simultaneously

I have a playbook that controls a clustered application. The issue is that this playbook can be called/executed a few different ways (manually on the command line [multiple SREs working], as a scheduled task, or programmatically via a 3rd-party system).
The problem is if the playbook tries to execute simultaneously, it could cause some issues to the application (nature of the application).
Question:
Is there a way to prevent the same playbook from running concurrently on the same Ansible server?
Environment:
ansible [core 2.11.6]
config file = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/ansible.cfg
configured module search path = ['/etc/ansible/library/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Nov 1 2021, 11:34:21) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
You could test whether a lock file exists at the start of the playbook and stop the play with meta: end_play if it does; if not, you create the file to block another launch:
- name: lock_test
  hosts: all
  vars:
    lock_file_path: /tmp/ansible-playbook.lock
  pre_tasks:
    - name: Check if some file exists
      delegate_to: localhost
      stat:
        path: "{{ lock_file_path }}"
      register: lock_file
    - block:
        - name: end play
          debug:
            msg: "playbook already launched, ending play"
        - meta: end_play
      when: lock_file.stat.exists
    - name: create lock_file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: touch
  # ****************** tasks start
  tasks:
    - name: debug
      debug:
        msg: "something to do"
  # ****************** tasks end
  post_tasks:
    - name: delete the lock file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: absent
Note that this only protects one playbook at a time: even if the first playbook stops, a second one is still launched unless it performs the same test. There is also a small window of time between testing for the file and creating it, so the probability of launching the same playbook twice in the same second is very low, but not zero. Still, this is better than having no protection at all.
Another solution is to lock an existing file and test whether the file is locked or not, but be careful with this option; see the flock Unix command.
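As a minimal sketch of that flock approach on the controller (site.yml is a placeholder for your playbook; -n makes flock fail immediately instead of waiting if the lock is already held):

# Hold an exclusive lock for the duration of the run; a second
# invocation exits non-zero instead of running concurrently.
flock -n /tmp/ansible-playbook.lock ansible-playbook site.yml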
You can create a lockfile on the controller with the PID of the ansible-playbook process.
- delegate_to: localhost
  vars:
    lockfile: /tmp/thisisalockfile
    my_pid: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"
    lock_pid: "{{ lookup('file', lockfile) }}"
  block:
    - name: Lock file
      copy:
        dest: "{{ lockfile }}"
        content: "{{ my_pid }}"
      when: lockfile is not exists
        or ('/proc/' ~ lock_pid) is not exists
        or 'ansible-playbook' not in lookup('file', '/proc/' ~ lock_pid ~ '/cmdline')
    - name: Make sure we won the lock
      assert:
        that: lock_pid == my_pid
        fail_msg: "{{ lockfile }} is locked by process {{ lock_pid }}"
Finding the current PID is the trickiest part; $PPID in the lookup is still the PID of a child, so we're grabbing the grandparent out of /proc/.
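Roughly, the process tree during that lookup looks like this (a sketch, not captured output):

ansible-playbook site.yml                    <- the PID we want (field 4 of the child's stat)
└── worker fork                              <- $PPID as seen inside the lookup's shell
    └── sh -c 'cut -d" " -f4 /proc/$PPID/stat'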
I wanted to post this here, but I do not consider it a final/perfect answer.
It does work for general purposes.
I put this 'playbook_lock.yml' at the root of my playbook and call it before any roles.
playbook_lock.yml:
# ./playbook_lock.yml
#
## NOTES:
## - Uses '/tmp/' on Ansible server as lock file directory
## - Format of lock file: E.g. 129416_20211103094638_playbook_common_01.lock
## -- Detailed explanation further down
## - Race-condition:
## -- Assumption playbooks will not run within 10sec of each other
## -- Assumption lockfiles were not deleted within 10sec
## -- If running the playbook manually with manual input of Ansible Vault
## --- Enter creds within 10 sec or the playbook will consider this run legacy
## - Built logic to only use ansible.builtin modules to not add additional requirements
##
#
---
## Build a transaction ID from year/month/day/hour/min/sec
- name: debug_transactionID
  debug:
    msg: "{{ transactionID }}"
  vars:
    filter: "{{ ansible_date_time }}"
    transactionID: "{{ filter.year + filter.month + filter.day + filter.hour + filter.minute + filter.second }}"
  run_once: true
  delegate_to: localhost
  register: reg_transactionID

## Find current playbook PID
## Race-condition => assumption playbooks will not run within 10sec of each other
## If playbook is already running >10secs, this return will be empty
- name: debug_current_playbook_pid
  ansible.builtin.shell:
    ## search PS for any command matching the name of the playbook | remove the 'grep' result | return only the 1st one (if etime < 10sec)
    cmd: "ps -e -o 'pid,etimes,cmd' | grep {{ ansible_play_name }} | grep -v grep | awk 'NR==1{if($2<10) print $1}'"
  changed_when: false
  run_once: true
  delegate_to: localhost
  register: reg_current_playbook_pid

## Check for existing lock files
- name: find_existing_lock_files
  ansible.builtin.find:
    paths: /tmp
    patterns: "*_{{ ansible_play_name }}.lock"
    age: 1s
  run_once: true
  delegate_to: localhost
  register: reg_existing_lock_files

## Check and verify existing lock files
- name: block_discovered_existing_lock_files
  block:
    ## Build fact of all lock files discovered
    - name: fact_existing_lock_files
      ansible.builtin.set_fact:
        fact_existing_lock_files: "{{ fact_existing_lock_files | default([]) + [item.path] }}"
      loop: "{{ reg_existing_lock_files.files }}"
      run_once: true
      delegate_to: localhost
      when:
        - reg_existing_lock_files.matched > 0

    ## Build fact of all discovered lock files
    - name: fact_playbook_lock_file_dict
      ansible.builtin.set_fact:
        fact_playbook_lock_file_dict: "{{ fact_playbook_lock_file_dict | default([]) + [data] }}"
      vars:
        ## E.g. lockfile => 129416_20211103094638_playbook_common_01.lock
        var_pid: "{{ item.split('/')[2].split('_')[0] }}"  ## extract the 1st portion = PID
        var_transid: "{{ item.split('/')[2].split('_')[1] }}"  ## extract 2nd portion = TransactionID
        var_playbook: "{{ item.split('/')[2].split('_')[2:] | join('_') }}"  ## Extract the remaining and join back together = playbook file
        data:
          {pid: "{{ var_pid }}", transid: "{{ var_transid }}", playbook: "{{ var_playbook }}"}
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost

    ## Check each discovered lock file
    ## Verify the PID is still operational
    - name: shell_verify_pid_is_active
      ansible.builtin.shell:
        cmd: "ps -p {{ item.pid }} | awk 'NR==2{print $1}'"
      loop: "{{ fact_playbook_lock_file_dict }}"
      changed_when: false
      delegate_to: localhost
      register: reg_verify_pid_is_active

    ## Build fact of discovered previous playbook PIDs
    - name: fact_previous_playbook_pids
      ansible.builtin.set_fact:
        fact_previous_playbook_pids: "{{ fact_previous_playbook_pids | default([]) + [item.stdout | int] }}"
      loop: "{{ reg_verify_pid_is_active.results }}"
      run_once: true
      delegate_to: localhost

    ## Build fact: is playbook already operational
    ## Add PIDs together
    ## If SUM = 0 => No PIDs found (no previous playbooks running)
    ## If SUM != 0 => previous playbook is still operational
    - name: fact_previous_playbook_operational
      ansible.builtin.set_fact:
        fact_previous_playbook_operational: "{{ ((fact_previous_playbook_pids | sum) | int) != 0 }}"
  when:
    - reg_existing_lock_files.matched > 0
    - reg_current_playbook_pid.stdout is defined

## Continue with playbook, as no previous instances running
- name: block_continue_playbook_operations
  block:
    ## Cleanup legacy lock files, as the PIDs are not operational
    - name: stat_cleanup_legacy_lock_files
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost
      when: fact_existing_lock_files | length >= 1

    ## Create lock file for current playbook
    - name: stat_create_playbook_lock_file
      ansible.builtin.file:
        path: "/tmp/{{ var_playbook_lock_file }}"
        state: touch
        mode: '0644'
      vars:
        var_playbook_lock_file: "{{ reg_current_playbook_pid.stdout }}_{{ reg_transactionID.msg }}_{{ ansible_play_name }}.lock"
      run_once: true
      delegate_to: localhost
  when:
    - reg_current_playbook_pid.stdout is defined

## Fail & exit playbook, as previous playbook is still operational
- name: block_playbook_already_operational
  block:
    - name: fail
      fail:
        msg: 'Playbook "{{ ansible_play_name }}" is already operational! This playbook will now exit without any modifications!!!'
      run_once: true
      delegate_to: localhost
  when: (fact_previous_playbook_operational is true) or
        (reg_current_playbook_pid.stdout is not defined)
...
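For reference, a minimal sketch of how a tasks file like this might be wired in before any roles run (the playbook filename and role name here are just placeholders):

# site.yml (hypothetical)
- hosts: all
  pre_tasks:
    # Take (or fail on) the lock before anything else happens.
    - ansible.builtin.import_tasks: playbook_lock.yml
  roles:
    - my_role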

Ansible when condition registered from csv

I'm using a CSV file as ingest data for my playbooks, but I'm having trouble with my when condition: either both tasks are skipped or both tasks are ok. My objective is that if Ansible sees the string in the when condition, the task is skipped for that specific instance.
Here is my playbook:
- name: "Read ingest file from CSV return a list"
community.general.read_csv:
path: sample.csv
register: ingest
- name: debug ingest
debug:
msg: "{{ item.AWS_ACCOUNT }}"
with_items:
- "{{ ingest.list }}"
register: account
- name: debug account
debug:
msg: "{{ account.results | map(attribute='msg') }}"
register: accountlist
- name:
become: yes
become_user: awx
delegate_to: localhost
environment: "{{ proxy_env }}"
block:
- name: "Assume role"
community.aws.sts_assume_role:
role_arn: "{{ item.ROLE_ARN }}"
role_session_name: "pm"
with_items:
- "{{ ingest.list }}"
register: assumed_role
when: "'aws-account-rnd' not in account.results | map(attribute='msg')"
Here is the content of sample.csv:
HOSTNAME,ENVIRONMENT,AWS_ACCOUNT,ROLE_ARN
test1,dev,aws-account-rnd,arn:aws:iam::XXXX1
test2,uat,aws-account-uat,arn:aws:iam::XXXX2
My objective is to skip all items in the CSV file with aws-account-rnd.
Your condition does not mention item so it will have the same result for all loop items.
Nothing you've shown requires the weird abuse of debug + register that you're doing, and it is in fact getting in your way.
- name: Read CSV file
  community.general.read_csv:
    path: sample.csv
  register: ingest

- name: Assume role
  community.aws.sts_assume_role:
    role_arn: "{{ item.ROLE_ARN }}"
    role_session_name: pm
  delegate_to: localhost
  become: true
  become_user: awx
  environment: "{{ proxy_env }}"
  loop: "{{ ingest.list }}"
  when: item.AWS_ACCOUNT != 'aws-account-rnd'
  register: assumed_role
If you'll always only care about one match you can also do this without a loop or condition at all:
- name: Assume role
  community.aws.sts_assume_role:
    role_arn: "{{ ingest.list | rejectattr('AWS_ACCOUNT', '==', 'aws-account-rnd') | map(attribute='ROLE_ARN') | first }}"
    role_session_name: pm
  delegate_to: localhost
  become: true
  become_user: awx
  environment: "{{ proxy_env }}"
  register: assumed_role
My objective is to skip all items in the CSV file with aws-account-rnd.
The multiple debug tasks you have with register seem like a long-winded approach, IMHO.
A simple task can debug the Role ARN only if the account does not match aws-account-rnd:
- name: show ROLE_ARN when account not equals aws-account-rnd
  debug:
    var: item['ROLE_ARN']
  loop: "{{ ingest.list }}"
  when: item['AWS_ACCOUNT'] != 'aws-account-rnd'
This results in:
TASK [show ROLE_ARN when account not equals aws-account-rnd] **********************************
skipping: [localhost] => (item={'HOSTNAME': 'test1', 'ENVIRONMENT': 'dev', 'AWS_ACCOUNT': 'aws-account-rnd', 'ROLE_ARN': 'arn:aws:iam:XXXX1'})
ok: [localhost] => (item={'HOSTNAME': 'test2', 'ENVIRONMENT': 'uat', 'AWS_ACCOUNT': 'aws-account-uat', 'ROLE_ARN': 'arn:aws:iam:XXXX2'}) => {
    "ansible_loop_var": "item",
    "item": {
        "AWS_ACCOUNT": "aws-account-uat",
        "ENVIRONMENT": "uat",
        "HOSTNAME": "test2",
        "ROLE_ARN": "arn:aws:iam:XXXX2"
    },
    "item['ROLE_ARN']": "arn:aws:iam:XXXX2"
}
The same logic can be used to pass item.ROLE_ARN to the community.aws.sts_assume_role task.
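A sketch of what that might look like (same loop and condition as above; proxy_env and the awx become-user come from the question):

- name: Assume role when account not equals aws-account-rnd
  community.aws.sts_assume_role:
    role_arn: "{{ item['ROLE_ARN'] }}"
    role_session_name: pm
  delegate_to: localhost
  become: true
  become_user: awx
  environment: "{{ proxy_env }}"
  loop: "{{ ingest.list }}"
  when: item['AWS_ACCOUNT'] != 'aws-account-rnd'
  register: assumed_role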

How to reduce a list against another list of patterns?

I need to write, with Ansible's built-in filters and tests, logic similar to this shell one-liner:
for path in $(find PATH_TO_DIR); do for pattern in $PATTERNS; do echo $path | grep -v $pattern; done; done
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    paths:
      - "/home/vagrant/.ansible"
      - path-one
      - path-two
      - path-three
      - "/home/vagrant/.ssh"
      - "/home/vagratn/"
    patterns:
      - ".*ssh.*"
      - ".*ansible.*"
      - ".*one.*"
  tasks:
    - name: set empty list
      set_fact:
        files_to_be_removed: [ ]
In the end I would like to have a list like this:
ok: [localhost] => {
    "msg": [
        "path-two",
        "path-three",
        "/home/vagratn/"
    ]
}
With this form I get a list where only the last item from patterns has been applied, because each iteration starts over from the full paths list and overwrites files_to_be_removed:
- set_fact:
    files_to_be_removed: |
      {{ paths
         | reject("search", item)
         | list }}
  with_items:
    - "{{ patterns }}"
The tasks below do the job
- set_fact:
    files_to_be_removed: "{{ paths }}"
- set_fact:
    files_to_be_removed: "{{ files_to_be_removed |
                             reject('search', item) |
                             list }}"
  loop: "{{ patterns }}"
- debug:
    var: files_to_be_removed
gives
    "files_to_be_removed": [
        "path-two",
        "path-three",
        "/home/vagratn/"
    ]
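Since the patterns are regular expressions, a loop-free variant is also possible by joining them into a single alternation (a sketch, assuming every entry in patterns is a valid regex):

- set_fact:
    # ".*ssh.*|.*ansible.*|.*one.*" -- any path matching any pattern is rejected
    files_to_be_removed: "{{ paths | reject('search', patterns | join('|')) | list }}"

This produces the same three-item list as the loop above.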

ansible ssh to json_query response values in loop

Team, I have a response from json_query which is a dict of key:value pairs, and I would like to iterate over all values and run an ssh command for each value.
The below gets me the list of all nodes:
- name: "Fetch all nodes from clusters using K8s facts"
k8s_facts:
kubeconfig: $WORKSPACE
kind: Node
verify_ssl: no
register: node_list
- debug:
var: node_list | json_query(query)
vars:
query: 'resources[].{node_name: metadata.name, nodeType: metadata.labels.nodeType}'
TASK [3_validations_on_ssh : debug]
ok: [target1] => {
    "node_list | json_query(query)": [
        {
            "nodeType": null,
            "node_name": "host1"
        },
        {
            "nodeType": "gpu",
            "node_name": "host2"
        },
        {
            "nodeType": "gpu",
            "node_name": "host3"
        }
    ]
}
Playbook to write: parse node_name and use that in an ssh command for all hosts 1-3:
- name: "Loop on all nodeNames and ssh."
shell: ssh -F ~/.ssh/ssh_config bouncer#{{ item }}.internal.sshproxy.net "name -a"
register: ssh_result_per_host
failed_when: ssh_result_per_host.rc != 0
with_item: {{ for items in query.node_name }}
- debug:
var: ssh_result_per_host.stdout_lines
Error output:
The offending line appears to be:
    failed_when: ssh_result_per_host.rc != 0
    with_item: {{ for items in query.node_name }}
    ^ here
Solution 2 also fails when I do loop:
shell: ssh -F ~/.ssh/ssh_config bouncer#{{ item.metadata.name }}.sshproxy.internal.net "name -a"
loop: "{{ node_list.resources }}"
loop_control:
  label: "{{ item.metadata.name }}"
Output of solution 2:
failed: [target1] (item=host1) => {"msg": "Invalid options for debug: shell"}
failed: [target1] (item=host2) => {"msg": "Invalid options for debug: shell"}
fatal: [target1]: FAILED! => {"msg": "All items completed"}
To expand on Ash's correct answer:
- name: "Fetch all nodes from clusters using K8s facts"
k8s_facts:
kubeconfig: $WORKSPACE
kind: Node
verify_ssl: no
register: node_list
- set_fact:
k8s_node_names: '{{ node_list | json_query(query) | map(attribute="node_name") | list }}'
vars:
query: 'resources[].{node_name: metadata.name, nodeType: metadata.labels.nodeType}'
- name: "Loop on all nodeNames and ssh."
shell: ssh -F ~/.ssh/ssh_config bouncer#{{ item }}.internal.sshproxy.net "name -a"
register: ssh_result_per_host
with_items: '{{ k8s_node_names }}'
Separately, unless you quite literally just want to run that one ssh command, the way Ansible thinks about that problem is via add_host:
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # ... as before, to generate the "k8s_node_names" list
    - add_host:
        hostname: '{{ item }}.internal.sshproxy.net'
        groups:
          - the_k8s_nodes
        ansible_ssh_username: bouncer
        # whatever other per-host variables you want
      with_items: '{{ k8s_node_names }}'

# now, run the playbook against those nodes
- hosts: the_k8s_nodes
  tasks:
    - name: run "name -a" on the host
      command: name -a
      register: whatever
This is a contrived example, because if you actually wanted just to get the list of Kubernetes nodes and use them in a playbook, you would use a dynamic inventory script.
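For illustration, a minimal dynamic-inventory sketch, assuming kubectl and jq are available on the controller (the script name, domain suffix, and group name are just examples):

#!/bin/sh
# k8s_nodes.sh -- emit Kubernetes node names as an Ansible inventory group.
# Usage: ansible-playbook -i ./k8s_nodes.sh playbook.yml
if [ "$1" = "--list" ]; then
  kubectl get nodes -o jsonpath='{.items[*].metadata.name}' \
    | tr ' ' '\n' \
    | jq -R '. + ".internal.sshproxy.net"' \
    | jq -s '{the_k8s_nodes: {hosts: .}}'
else
  echo '{}'  # no per-host variables
fi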
It is not with_item, it is with_items.
You cannot use with_item: {{ for items in query.node_name }}.
Save the values to a variable and use the variable in with_items: "{{ new_variable }}".

iteration using with_items and register

Looking for help with a problem I've been struggling with for a few hours. I want to iterate over a list, run a command, register the output of each command, and then iterate with debug over each unique register's {{ someregister }}.stdout.
For example, the following code will spit out "msg": "1" and "msg": "2"
---
- hosts: localhost
  gather_facts: false
  vars:
    numbers:
      - name: "first"
        int: "1"
      - name: "second"
        int: "2"
  tasks:
    - name: Register output
      command: "/bin/echo {{ item.int }}"
      register: result
      with_items: "{{ numbers }}"
    - debug: msg={{ item.stdout }}
      with_items: "{{ result.results }}"
If, however, I try to capture the output of a command in a register variable that is named from the loop item, I have trouble accessing the list or the elements within it. For example, altering the code slightly to:
---
- hosts: localhost
  gather_facts: false
  vars:
    numbers:
      - name: "first"
        int: "1"
      - name: "second"
        int: "2"
  tasks:
    - name: Register output
      command: "/bin/echo {{ item.int }}"
      register: "{{ item.name }}"
      with_items: "{{ numbers }}"
    - debug: var={{ item.name.stdout }}
      with_items: "{{ numbers }}"
Gives me:
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "'unicode object' has no attribute 'stdout'"}
Is it not possible to dynamically name the register output of a command so that it can be called later on in the play? I would like each iteration of the command and its subsequent register name to be accessible uniquely; given the last example, I would expect variables named "first" and "second" to be registered, but there aren't.
Taking away the with_items from the debug stanza, and just explicitly defining the var or message using first.stdout, returns "undefined".
Ansible version is 2.0.2.0 on CentOS 7.2.
Thanks in advance.
OK, so I found a post on Stack Overflow that helped me better understand what is going on here and how to access the elements in result.results.
The resultant code I ended up with was:
---
- hosts: localhost
  gather_facts: false
  vars:
    numbers:
      - name: "first"
        int: "1"
      - name: "second"
        int: "2"
  tasks:
    - name: Register output
      command: "/bin/echo {{ item.int }}"
      register: echo_out
      with_items: "{{ numbers }}"
    - debug: msg="item.item={{ item.item.name }}, item.stdout={{ item.stdout }}"
      with_items: "{{ echo_out.results }}"
Which gave me the desired result:
    "msg": "item.item=first, item.stdout=1"
    "msg": "item.item=second, item.stdout=2"
I am not sure if I understand the question correctly, but maybe this can help:
- debug: msg="{{ item.stdout }}"
  with_items: echo_out.results
Please note that Ansible will print both each item and the msg - so you need to look carefully for a line that looks like "msg": "2".
