local_action: shell error concatenate files - ansible

I'm getting the following error in my playbook. Why, and how do I fix it? (The playbook collects the list of updates from remote hosts and concatenates the lists into one file.)
- name: Save update_deb_packs in file on ansible-host
  copy:
    content: "{{ update_deb_packs.stdout }}"
    dest: ~/tmp/{{ inventory_hostname }}_update_deb_packs
  delegate_to: 127.0.0.1

- name: merge files from different hosts in once
  local_action: shell cat ~/tmp/* > ~/tmp/list_update
Result:
TASK [Save update_deb_packs in file on ansible-host] *******************************************
changed: [g33-linux -> 127.0.0.1]
changed: [dell-e6410 -> 127.0.0.1]
TASK [merge files from different hosts in once] ********************************
changed: [g33-linux -> localhost]
fatal: [dell-e6410 -> localhost]: FAILED! => {"changed": true, "cmd": "cat ~/tmp/* > ~/tmp/list_update", "delta": "0:00:00.052968", "end": "2017-12-23 13:48:37.029706", "msg": "non-zero return code", "rc": 1, "start": "2017-12-23 13:48:36.976738", "stderr": "cat: /home/alex/tmp/list_update: input and output in one file", "stderr_lines": ["cat: /home/alex/tmp/list_update: input and output in one file"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/home/alex/.ansible/playbooks/update_deb_list.retry
Before the run, the ~/tmp directory is empty. After the shell task runs, the result file (list_update) exists and its content is correct.

You use the glob ~/tmp/*, and one of the matched files is ~/tmp/list_update, so you effectively tell cat to read from and write to the same file through the redirection, and the system refuses to do that.
You also didn't specify run_once, so the merge files from different hosts in once task runs once for every host in the play.
If you are sure ~/tmp/list_update didn't exist beforehand, adding run_once: true to that task would resolve the problem, but it's far from the best approach.
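For completeness, a minimal sketch of that quick fix (hedged: it assumes ~/tmp starts clean, and it narrows the glob so the output file can never match its own input):
- name: merge files from different hosts in once
  local_action: shell cat ~/tmp/*_update_deb_packs > ~/tmp/list_update
  run_once: true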
Long story short, you can achieve what you want in a single task, for example with a Jinja2 template:
- copy:
    content: "{% for host in ansible_play_hosts %}{{ hostvars[host].update_deb_packs.stdout }}\n{% endfor %}"
    dest: ~/tmp/list_update
  delegate_to: localhost
  run_once: true
You can add separators with the host name to indicate which data came from which host:
content: "{% for host in ansible_play_hosts %}From host: {{ host }}\n{{ hostvars[host].update_deb_packs.stdout }}\n{% endfor %}"
Or, using operations on lists:
content: "{{ ansible_play_hosts | map('extract', hostvars, 'update_deb_packs') | map(attribute='stdout') | list }}"

Related

ansible copy module AnsibleUndefined error

I have the below task to copy the file differences. There is no error, but it outputs "AnsibleUndefined" into the destination file.
- name: dff tesk
  local_action: shell diff -s --unified=0 /src/path/{{ item }} /dest/path/{{ item }}
  with_items:
    - file1.yml
    - file2.yml
  register: file_cmp

- name: copy
  copy:
    content: "{{ file_cmp.results | map(attribute='stdout') | list }}"
    dest: "/path/to_dest/file_cmp.log"
    remote_src: yes
The task does not produce any error and creates the output file with the content
AnsibleUndefined,AnsibleUndefined,AnsibleUndefined,AnsibleUndefined,AnsibleUndefined
The same playbook works and generates correct output when executed locally.
"results": [
{
"ansible_loop_var" : "item"
"changed": "false"
"failed": "false"
"failed_when_result":"false"
"item": "fail1.yml"
"module_stderr":"sudo : effective uid is not 0 is /bin/sudo on a file system with the nosuid option set or an NFS file system without root priviledges ?" "msg": "MODULE FAILURE"
"rc": "1"

Getting Ansible runtime error 'dict object' has no attribute 'stdout_lines' despite variable not null

Below is my playbook, which has a variable running_processes that contains a list of PIDs (one or more).
Next, I read the user ID for each of the PIDs. All good so far.
When I then try to print the list of user IDs in the curr_user_ids variable using the debug module, I get the error: 'dict object' has no attribute 'stdout_lines'.
I was expecting curr_user_ids to contain one or more entries, as evident from the output shared below.
- name: Get running processes list from remote host
  shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
  changed_when: false
  register: running_processes

- name: Gather USER IDs from processes id before killing.
  shell: "id -nu `cat /proc/{{ running_processes.stdout }}/loginuid`"
  register: curr_user_ids
  with_items: "{{ running_processes.stdout_lines }}"

- debug: msg="USER ID LIST HERE:{{ curr_user_ids.stdout }}"
  with_items: "{{ curr_user_ids.stdout_lines }}"
TASK [Get running processes list from remote host] **********************************************************************************************************
task path: /app/wls/startstop.yml:22
ok: [10.9.9.111] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "cmd": "ps -few | grep java | grep -v grep | awk '{print $2}'", "delta": "0:00:00.166049", "end": "2019-11-06 11:49:42.298603", "rc": 0, "start": "2019-11-06 11:49:42.132554", "stderr": "", "stderr_lines": [], "stdout": "24032", "stdout_lines": ["24032"]}
TASK [Gather USER IDS of processes id before killing.] ******************************************************************************************************
task path: /app/wls/startstop.yml:59
changed: [10.9.9.111] => (item=24032) => {"ansible_loop_var": "item", "changed": true, "cmd": "id -nu `cat /proc/24032/loginuid`", "delta": "0:00:00.116639", "end": "2019-11-06 11:46:41.205843", "item": "24032", "rc": 0, "start": "2019-11-06 11:46:41.089204", "stderr": "", "stderr_lines": [], "stdout": "user1", "stdout_lines": ["user1"]}
TASK [debug] ************************************************************************************************************************************************
task path: /app/wls/startstop.yml:68
fatal: [10.9.9.111]: FAILED! => {"msg": "'dict object' has no attribute 'stdout_lines'"}
Can you please suggest why I am getting the error and how I can resolve it?
A few points to note on why your solution didn't work:
The task Get running processes list from remote host returns a newline-separated ("\n") string, so you need to process this output and turn it into a proper list object first.
The task Gather USER IDs from processes id before killing. returns a dictionary containing the key results, whose value is of type list, so you need to iterate over it and fetch the stdout value of each element.
This is how I solved it.
---
- hosts: "localhost"
  gather_facts: true
  become: true
  tasks:

    - name: Set default values
      set_fact:
        process_ids: []
        user_names: []

    - name: Get running processes list from remote host
      shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
      changed_when: false
      register: running_processes

    - name: Register a list of Process ids (Split newline from output before)
      set_fact:
        process_ids: "{{ running_processes.stdout.split('\n') }}"

    - name: Gather USER IDs from processes id before killing.
      shell: "id -nu `cat /proc/{{ item }}/loginuid`"
      register: curr_user_ids
      with_items: "{{ process_ids }}"

    - name: Register a list of User names (Out of result from before)
      set_fact:
        user_names: "{{ user_names + [item.stdout] | unique }}"
      when: item.rc == 0
      with_items:
        - "{{ curr_user_ids.results }}"

    - name: Set unique entries in User names list
      set_fact:
        user_names: "{{ user_names | unique }}"

    - name: DEBUG
      debug:
        msg: "{{ user_names }}"
The variable curr_user_ids registers the results of each iteration:
  register: curr_user_ids
  with_items: "{{ running_processes.stdout_lines }}"
The list of the results is stored in curr_user_ids.results.
Take a look at the variable
- debug:
    var: curr_user_ids
and loop over the stdout_lines
- debug:
    var: item.stdout_lines
  loop: "{{ curr_user_ids.results }}"

Ansible mount module to just check state and not report status

Team,
I am writing a validation task that is supposed to just check whether a mount exists and report its state from the output. My task is below, but it fails and I am not sure how to handle it. Any hint on what adjustments I need to make?
- name: "Verify LVP Mounts on CPU Nodes for mount_device"
shell: "mount | grep sdd"
register: lvp_mount
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
#failed_when: lvp_mount.rc != 0
#ignore_errors: yes
# - debug:
# var: lvp_mount
- name: "Report status of mounts"
fail:
msg: |
Mounts sdd not found
Output of `mount | grep sdd`:
{{ lvp_mount.stdout }}
{{ lvp_mount.stderr }}
when: lvp_mount | failed
changed: [localhost -> ] => (item=hostA)
[WARNING]: Consider using the mount module rather than running 'mount'. If you
need to use command because mount is insufficient you can add 'warn: false' to
this command task or set 'command_warnings=False' in ansible.cfg to get rid of
this message.
failed: [localhost -> hostA.test.net] (item=hostA) => {"ansible_loop_var": "item", "changed": true, "cmd": "mount | grep sdd", "delta": "0:00:00.009284", "end": "2019-11-06 18:22:56.138007", "failed_when_result": true, "item": "hostA", "msg": "non-zero return code", "rc": 1, "start": "2019-11-06 18:22:56.128723", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [services-pre-install-checks : Report status of mounts] ************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/run_ansible_playbook/k8s/baremetal/roles/services-pre-install-checks/tasks/main.yml': line 265, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: \"Report status of mounts\"\n ^ here\n"}
Your task "Verify LVP Mounts on CPU Nodes for mount_device" is a loop so the register behavior is modified as specified in the documentation.
You can access the various outputs with lvp_mount.results.X.stdout where X is the index.
There is a cleaner way to write your script however. More specifically using:
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
is bad practice. You can accomplish your desired outcome at the play level.
For example:
- hosts: kube-cpu-node   # allows you to iterate over all hosts in the kube-cpu-node group
  tasks:
    - name: "Verify LVP Mounts on CPU Nodes for mount_device"
      shell: "mount | grep sdd"
      register: lvp_mount
      ignore_errors: yes
      # notice there is no loop here

    - name: "Report status of mounts"
      fail:
        msg: |
          Mounts sdd not found
          Output of `mount | grep sdd`:
          {{ lvp_mount.stdout }}    # no loop, so you can use lvp_mount.stdout
          {{ lvp_mount.stderr }}    # no loop, so you can use lvp_mount.stderr
      when: lvp_mount | failed
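As the warning in the output already hints, you can also avoid shelling out to mount entirely and inspect the gathered facts instead (a hedged sketch; it assumes fact gathering is enabled so that ansible_mounts is populated):
- hosts: kube-cpu-node
  gather_facts: yes
  tasks:
    - name: "Report status of mounts"
      fail:
        msg: "No mounted device matching 'sdd' found on {{ inventory_hostname }}"
      when: ansible_mounts | selectattr('device', 'search', 'sdd') | list | length == 0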

Ansible slurp module fails with a variable

When I use an Ansible variable with the src option of the slurp module, the slurp module fails.
I'm trying to build an Ansible playbook to copy the SSH public key from each node in the group to every other node in the group. I cannot use the Ansible lookup() function because that can look up files only on the Ansible server. Instead, I build the path to id_rsa.pub with the intent of slurp'ing it into memory for the authorized_key module.
My problem is that when I specify an Ansible variable as the src for the slurp module, the playbook fails, even though the variable holds the correct path to the id_rsa.pub file. If I specify the path literally instead of using the variable, the slurp module works.
Here is my playbook:
# Usage: ansible-playbook copyPublicKey.yaml --limit <GRP> --extra-vars "userid=<userid>"
---
- hosts: all
  remote_user: root
  vars:
    user_id: "{{ userid }}"
  tasks:

    - name: Determine the path to the public key file
      shell: grep "{{ user_id }}" /etc/passwd | cut -d":" -f6
      changed_when: false
      register: user_home

    - set_fact:
        rsa_file: "{{ user_home.stdout_lines | to_nice_yaml | replace('\n', '') }}/.ssh/id_rsa.pub"

    - debug:
        msg: "Public key file - {{ rsa_file }}"

    - slurp:
        src: "{{ rsa_file }}"
      register: public_key

    - debug:
        msg: "Public key: {{ public_key }}"
The invocation:
ansible-playbook copyPublicKey.yaml --limit DEV --extra-vars "userid=deleteme2"
The output of the slurp module:
TASK: [slurp ] ****************************************************************
failed: [hana-np-11.cisco.com] => {"failed": true}
msg: file not found: - /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
failed: [hana-np-13.cisco.com] => {"failed": true}
msg: file not found: - /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
failed: [hana-np-14.cisco.com] => {"failed": true}
msg: file not found: - /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
failed: [hana-np-15.cisco.com] => {"failed": true}
msg: file not found: - /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
failed: [hana-np-12.cisco.com] => {"failed": true}
msg: file not found: - /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
Yet if I specify the actual path in the slurp module:
- slurp:
    src: /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
I get the output I expect:
TASK: [slurp ] ****************************************************************
ok: [hana-np-11.cisco.com]
ok: [hana-np-12.cisco.com]
ok: [hana-np-15.cisco.com]
ok: [hana-np-14.cisco.com]
ok: [hana-np-13.cisco.com]
TASK: [debug ] ****************************************************************
ok: [hana-np-11.cisco.com] => {
"msg": "Public key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBQkl3QUFBUUVBbHgzM0FUdGlLcWlrblQxMWorNjZKSXVFQW1OWWxZcDdCbHIwZXBzaWRuZ3NNYW9pMjNYL1Bjb0EvdnVxYmpxbmZ0Q1YzQmhUdURYQ3BYY0FwNDF5TEF5dlIvOW8xYi9mR2VtZWtlS296ZDh5Smh5VXFMR3IvMmJ6N0N2NFdaOWVqU0dyMFlzWGNjSFNDRmYzNmJreVBPNUg5NUdZdXpGMUV2RzVVcGM3YVNXWEVpM3JWVGJETEhBVC9YTk0veXhRUEMxRjB5Vi8yRkY1WDg4SXU5U0w2TGxrVnhsMUU3VkozTm40UEQrY3RUbGxFeno3enNETWxDbXpzMW5MaHROWnFuSXRZUkhMd21WUk5VcHJvYlpyUm1YMFJVYmIwNFNVbzdBbXpBNnZNcHR1OE1aUURzUGRMckMwYWxPWnZHMHpEUi9ReDlGalh6MVRXMld5WWhZNllRPT0gYW5zaWJsZS1nZW5lcmF0ZWQgb24gaGFuYS1ucC0xMQo=..."
}
ok: [hana-np-12.cisco.com] => {
"msg": "Public key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBQkl3QUFBUUVBd2hPa0FqcEhwbUU4ZEkvemR6d0I1U0htZnlpdXljd2ZmK2lDNW9KaEN4aU5ST0ZKbnVyOFArWno2K2c4Qy8waUdkNGs1ZHIwcE9IY1liWHlMeDNObHhTTWN6RnowZWNSUnMzL1FOOEQzSnBtWlR6T0JaMm1SaG1FY0hGbS9uTkh5eUZyWXlPOHlQNWpqNmxiSUlwU0lMb1BZZGJvM1dxenBGZjhiaDFlVkhRTEo2citVZzNwcUhUeWRzRDZhY3Rtc1ZvWWUvdVV6WExiYkpKbUxxdi9ZeGU4ZW9aUmtONkVqNGtaVDBibDFYUktkM0xTQlZKMHRwa3A1bVgzekxMNGVvWVEzMzMzam1qd2MzU1dWSHVObVl1b1ZsRFEvSzdoR2lFVHd5YUM3VU9hQ29pcEVnUGl5b2o3U1JpNzZCenpxV2hXc2dIbHI0REM3U0p2WFpObk9RPT0gYW5zaWJsZS1nZW5lcmF0ZWQgb24gaGFuYS1ucC0xMgo=..."
}
ok: [hana-np-13.cisco.com] => {
"msg": "Public key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBQkl3QUFBUUVBemFzeitlSW9OSnc2Q3psaVVSR2NQbnMyRkVWbDRtd1RqbDJrWkYxcC9uL0d1b3RPS01vMnR2RmQrN3JmY3YrN2VSZXNtM1lldXJzOXRCQXdSVDdvbXdjckpqUkJiM2Q3UHd3MnM1OTJjb0RjdFo3aE9vL3p4S1FCeWtjaXcvejJ3U1pKWUZKdnE4eFloWkxQNmxnK01uVW1Rd0JROURhREc3MVY2VFc5cFdSM0poYk5BZ2s4bWpBYlEyQk1kK2lWOGxoOUorcWIxbGE3RVVRRGZNaUM3R0ZKVmxJUVlpdm02Q045dGZOdnJGRlNaamZ6MEZKeXhQQWd1VW05d2NUMG9lUDdEQTJTVFNZQ0trQklvRmVuTUp3eFFzNDZSclJSenlBNlErN3I3Q3g5WnpPSlQ0OGgzQnpHQkNwYnhNd0R2L1RMNjRNTDN3UGxXVng5NGZvU2NRPT0gYW5zaWJsZS1nZW5lcmF0ZWQgb24gaGFuYS1ucC0xMwo=..."
}
ok: [hana-np-14.cisco.com] => {
"msg": "Public key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBQkl3QUFBUUVBd3U5V3FaZHBmSjdUZkFhNkc2V05pTkpraWVjWllObjZ4Mno2WWo3eUhsdDBrVWhoYmVXZWR4and0Ukl5d0h2UUZCU2xoVVo2V1Vhc2w0RlZsdzcrOHRGVkVIMkkwTUQ1YjN5UGFrclFhVXdxZHZ6TDEvVXpQWkJjTGU0SENmTGdLNjAxaXdncXBvdUVoRjFEdjEvVHorcTFvU0dmTTFOVjFPOFBGc1pQVjRZbVpoTC9qTVZBNE5MUzFKdXRoS1VmZlo3TmJOSk42SmdWaE14UW8vQXZIZmZvYktDUGJ1VWxDTFE0cTV6VlVTWkxyV1p3VmdncU44MkVnd0xYS00xU0IvOVQ2YmFMVXZhaDVVN0s1QjEzR0pFVXJWek1NNWZ0clpwdEo4T1N3OXUvMHJLVXlzZ0hRcUZVM0ViN0JkVkUxQVdCRXFtaW1XRFdMV3hUclJmZ053PT0gYW5zaWJsZS1nZW5lcmF0ZWQgb24gaGFuYS1ucC0xNAo=..."
}
ok: [hana-np-15.cisco.com] => {
"msg": "Public key: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBQkl3QUFBUUVBdWwxTjlWQkNSU3QrOG5jdTRMVUlBb2hxUEkrWmRlcEtINHlhU1BBZWtETXdkaXpLVHZRSElXdC9iVkpXUzNma3BOYjVuTXFtMkR1eFZnKzBtZmRPTTk1Q2ZsUk00ZUNON05Jb25HQTQrUGVyOXRYdlNrdFU4U0huWERsZVNNa3dybUxnQ1dQN2lwbDRTdGt1SUNGaFh1NzBkOHBEN29IeW9BZVVWWVFuYzRkZldHQStVNU1SdWNSaC9mNWhhS25pN1hpRHZ0alVTaDJHN1RpMTlIdHBvYnlQdmdNSjVnRUt2OXRlWGJ3Qk14YXZicEFiRjJVOTRRTmorKzZOYTZIaWUweS9JQzVtWDRvSmgyb2Z6bGwybjA0MHdtQWRkQS9mY1d1L0IvR3FyOWNDZlhXK0hIUU95MEJoUXNBMk54K3A1RU4rbG1iREg1TUNHTW41Y0RLVEpRPT0gYW5zaWJsZS1nZW5lcmF0ZWQgb24gaGFuYS1ucC0xNQo=..."
}
What am I doing wrong? What don't I know about using Ansible variables?
The slurp module fails because you provide it incorrect data; the error message is:
msg: file not found: - /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub
Rephrasing:
The file named "- /usr/sap/DEV/home/deleteme2/.ssh/id_rsa.pub" cannot be found.
Quite obviously, a file whose name starts with a hyphen and a space does not exist, and the error is valid.
The malformed data comes from the unnecessary to_nice_yaml filter applied to the user_home.stdout_lines list (the hyphen is a list-element marker in YAML).
You can safely remove it and use the following:
---
- hosts: all
  remote_user: root
  vars:
    user_id: "{{ userid }}"
  tasks:

    - name: Determine the path to the public key file
      shell: grep "{{ user_id }}" /etc/passwd | cut -d":" -f6
      changed_when: false
      register: user_home

    - slurp:
        src: "{{ user_home.stdout_lines[0] }}/.ssh/id_rsa.pub"
      register: public_key

    - debug:
        msg: "Public key: {{ public_key }}"
Elements of stdout_lines don't have trailing newlines, so replace('\n', '') is unnecessary; but as stdout_lines is a list, even with a single element, you need to address the first element with [0].
Alternatively, you could get the value with {{ user_home.stdout | replace('\n', '') }}/.ssh/id_rsa.pub.
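As a side note (not part of the original answer), the getent module could replace the shell/grep pipeline entirely; a minimal sketch, assuming the user exists in the local passwd database:
- name: Look up the user's passwd entry
  getent:
    database: passwd
    key: "{{ user_id }}"

# getent_passwd[user] holds [password, uid, gid, gecos, home, shell]; index 4 is the home directory
- slurp:
    src: "{{ getent_passwd[user_id][4] }}/.ssh/id_rsa.pub"
  register: public_key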
In this case, the issue is an incorrect file name (as mentioned by techraf).
Just a note from my own experience: slurp also shows the same "file not found" error when the file resides in a directory whose permissions don't allow the Ansible user to read it. It should report a permission error, but instead it shows the "file not found" error.
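To tell the two cases apart, a stat pre-check can help (a hedged sketch; stat.readable reflects the remote Ansible user's access and is only set when the file exists):
- stat:
    path: "{{ user_home.stdout_lines[0] }}/.ssh/id_rsa.pub"
  register: key_stat

- fail:
    msg: "Key file is missing or not readable by the Ansible user"
  when: not key_stat.stat.exists or not key_stat.stat.readable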

How to set an Ansible variable for all plays/hosts?

This question is NOT answered. Someone mentioned environment variables. Can you elaborate on this?
This seems like a simple problem, but not in Ansible, and it keeps coming up, especially in error conditions. I need a global variable: one that I can set when processing one host's play, then check at a later time with another host. In a nutshell, so I can branch later in the playbook depending on the variable.
We have no control over custom software installation, but if it is installed, we have to put different software on other machines. To top it off, the installations vary depending on the VM folder. My kingdom for a global var.
The scope of a variable relates ONLY to the current ansible_hostname. Yes, we have group_vars/all.yml for globals, but we can't set those in a play. If I set a variable, no other host's play/task can see it. I understand the scope of variables, but I want to SET a global variable that can be read throughout all the plays in a playbook.
The actual implementation is unimportant, but variable access is.
My question: is there a way to set a variable that can be checked when running a different task on another host? Something like setGlobalSpaceVar(myvar, true)? I know there isn't any such method; I'm looking for a work-around. Rephrasing: set a variable in one task for one host, then later, in another task for another host, read that variable.
The only way I can think of is to change a file on the controller, but that seems bogus.
An example
The following relates to Oracle backups and our local executable, but I'm keeping it generic. For the example below: yes, I could use run_once, but that won't answer my question. This variable-access problem keeps coming up in different contexts.
I have 4 xyz servers and 2 programs that need to be executed, but only on 2 different machines; I don't know which ones in advance. The settings may change in different VM environments.
Our programOne runs on the server that has a drive E. I can find which server has drive E using Ansible and run the play accordingly by setting a variable (driveE_machine). It applies only to that host; the other 3 machines won't have driveE_machine set.
In a later play, I need to execute another program on ONLY one of the other 3 machines. That means I need to set a variable that can be read by the other 2 hosts that didn't run the 2nd program.
I'm not sure how to do it.
Inventory file:
[xyz]
serverxyz[1:4].private.mystuff
Playbook example:
---
- name: stackoverflow variable question
hosts: xyz
gather_facts: no
serial: 1
tasks:
- name: find out who has drive E
win_shell: dir e:\
register: adminPage
ignore_errors: true
# This sets a variable that can only be read for that host
- name: set fact driveE_machine when rc is 0
set_fact:
driveE_machine: "{{inventory_hostname}}"
when: adminPage.rc == 0
- name: run program 1
include: tasks/program1.yml
when: driveE_machine is defined
# program2.yml executes program2 and needs to set some kind of variable
# so this include can only be executed once for the other 3 machines
# (not one that has driveE_machine defined and ???
- name: run program 2
include: tasks/program2.yml
when: driveE_machine is undefined and ???
# please don't say run_once: true - that won't solve my variable access question
Is there a way to set a variable that can be checked when running a task on another host?
Not sure what you actually want, but you can set a fact for every host in a play with a single looped task (simulating a global variable):
playbook.yml
---
- hosts: mytest
  gather_facts: no
  tasks:

    # Set myvar fact for every host in a play
    - set_fact:
        myvar: "{{ inventory_hostname }}"
      delegate_to: "{{ item }}"
      with_items: "{{ play_hosts }}"
      run_once: yes

    # Ensure that myvar is the name of the first host
    - debug:
        msg: "{{ myvar }}"
hosts
[mytest]
aaa ansible_connection=local
bbb ansible_connection=local
ccc ansible_connection=local
result
PLAY [mytest] ******************
META: ran handlers
TASK [set_fact] ******************
ok: [aaa -> aaa] => (item=aaa) => {"ansible_facts": {"myvar": "aaa"}, "ansible_facts_cacheable": false, "changed": false, "failed": false, "item": "aaa"}
ok: [aaa -> bbb] => (item=bbb) => {"ansible_facts": {"myvar": "aaa"}, "ansible_facts_cacheable": false, "changed": false, "failed": false, "item": "bbb"}
ok: [aaa -> ccc] => (item=ccc) => {"ansible_facts": {"myvar": "aaa"}, "ansible_facts_cacheable": false, "changed": false, "failed": false, "item": "ccc"}
TASK [debug] ******************
ok: [aaa] => {
"msg": "aaa"
}
ok: [bbb] => {
"msg": "aaa"
}
ok: [ccc] => {
"msg": "aaa"
}
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#fact-caching
As shown elsewhere in the docs, it is possible for one host to reference variables of another host, like so:
{{ hostvars['asdf.example.com']['ansible_os_family'] }}
This even applies to variables set dynamically in playbooks.
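A minimal sketch of that pattern against the inventory from the question (hedged; the host names are just placeholders): set a fact on one host in one play, then read it from another host in a later play via hostvars:
- hosts: serverxyz1.private.mystuff
  gather_facts: no
  tasks:
    - set_fact:
        driveE_machine: "{{ inventory_hostname }}"

- hosts: serverxyz2.private.mystuff
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ hostvars['serverxyz1.private.mystuff']['driveE_machine'] }}"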
This answer doesn't presuppose your hostnames, nor how many hosts have a "drive E:". It will select the first reachable host that also has a "drive E:". I have no Windows boxes, so I fake it with a random coin toss for whether a host does or doesn't; you can of course use your original win_shell task, which I've commented out.
---
- hosts: all
  gather_facts: no
  # serial: 1
  tasks:

    # - name: find out who has drive E
    #   win_shell: dir e:\
    #   register: adminPage
    #   ignore_errors: true

    - name: "Fake finding hosts with drive E:."
      # I don't have hosts with "drive E:", so fake it.
      shell: |
        if [ $RANDOM -gt 10000 ] ; then
          exit 1
        else
          exit 0
        fi
      args:
        executable: /bin/bash
      register: adminPage
      failed_when: false
      ignore_errors: true

    - name: "Dict of hosts with E: drives."
      run_once: yes
      set_fact:
        driveE_status: "{{ dict(ansible_play_hosts_all |
                           zip(ansible_play_hosts_all |
                               map('extract', hostvars, ['adminPage', 'rc']) | list
                           ))
                        }}"

    - name: "List of hosts with E: drives."
      run_once: yes
      set_fact:
        driveE_havers: "{%- set foo=[] -%}
                        {%- for dE_s in driveE_status -%}
                          {%- if driveE_status[dE_s] == 0 -%}
                            {%- set _ = foo.append( dE_s ) -%}
                          {%- endif -%}
                        {%- endfor -%}{{ foo|list }}"

    - name: "First host with an E: drive."
      run_once: yes
      set_fact:
        driveE_first: "{%- set foo=[] -%}
                       {%- for dE_s in driveE_status -%}
                         {%- if driveE_status[dE_s] == 0 -%}
                           {%- set _ = foo.append( dE_s ) -%}
                         {%- endif -%}
                       {%- endfor -%}{{ foo|list|first }}"

    - name: Show me.
      run_once: yes
      debug:
        msg:
          - "driveE_status: {{ driveE_status }}"
          - "driveE_havers: {{ driveE_havers }}"
          - "driveE_first: {{ driveE_first }}"
It works for me; you can directly use the registered var, no need to use set_fact.
---
- hosts: manager
  tasks:
    - name: dir name
      shell: cd /tmp && pwd
      register: homedir
      when: "'manager' in group_names"

- hosts: all
  tasks:
    - name: create a test file
      shell: touch "{{ hostvars['manager.example.io']['homedir'].stdout }}/t1.txt"
I have used the set_fact module in an Ansible prompt-menu task, which helped pass the user input to all the inventory hosts.
#MONITORING PLAYBOOK
- hosts: all
  gather_facts: yes
  become: yes
  tasks:

    - pause:
        prompt: "\n\nWhich monitoring you want to perform?\n\n--------------------------------------\n\n1. Memory Utilization:\n2. CPU Utilization:\n3. Filesystem Utilization:\n4. Exist from Playbook: \n5. Fetch Nmon Report \n \nPlease select option: \n--------------------------------------\n"
      register: menu

    - set_fact:
        option: "{{ menu.user_input }}"
      delegate_to: "{{ item }}"
      with_items: "{{ play_hosts }}"
      run_once: yes

    #1 Memory Monitoring
    - pause:
        prompt: "\n-------------------------------------- \n Enter monitoring Incident Number = "
      register: incident_number_result
      when: option == "1"

    - name: Standardize incident_number variable
      set_fact:
        incident_number: "{{ incident_number_result.user_input }}"
      when: option == "1"
      delegate_to: "{{ item }}"
      with_items: "{{ play_hosts }}"
      run_once: yes
The ansible-playbook result is:
[ansibusr#ansiblemaster monitoring]$ ansible-playbook monitoring.yml
PLAY [all] **************************************************************************
TASK [Gathering Facts] **************************************************************************
ok: [node2]
ok: [node1]
[pause]
Which monitoring you want to perform?
--------------------------------------
1. Memory Utilization:
2. CPU Utilization:
3. Filesystem Utilization:
4. Exist from Playbook:
5. Fetch Nmon Report
Please select option:
--------------------------------------
:
TASK [pause] **************************************************************************
ok: [node1]
TASK [set_fact] **************************************************************************
ok: [node1 -> node1] => (item=node1)
ok: [node1 -> node2] => (item=node2)
[pause]
--------------------------------------
Enter monitoring Incident Number = :
INC123456
TASK [pause] **************************************************************************
ok: [node1]
TASK [Standardize incident_number variable] ****************************************************************************************************************************
ok: [node1 -> node1] => (item=node1)
ok: [node1 -> node2] => (item=node2)
