My intention with the playbook below is to run a shell command only when it finds any of the values from disk_list. I need help framing the when condition to do this.
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        disk_list:
          - sda
          - sdb
          - sdc
    - name: Get df -hT output to know XFS file system
      shell: df -hT |grep xfs|awk '{print $1}'
      register: df_result
    - name: Run shell command on each XFS file system
      shell: ls -l {{ item }} | awk '{print $1}'
      with_items: "{{ df_result.stdout_lines }}"
      when: "{{ disk_list.[] }} in {{ item }}"
BTW, on my system, the "df_result" variable looks like this:
TASK [debug] ***************************************************************************************************************************************
ok: [localhost] => {
    "df_result": {
        "changed": true,
        "cmd": "df -hT |grep xfs|awk '{print $1}'",
        "delta": "0:00:00.017588",
        "end": "2019-03-01 23:55:21.318871",
        "failed": false,
        "rc": 0,
        "start": "2019-03-01 23:55:21.301283",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "/dev/sda3\n/dev/sda1",
        "stdout_lines": [
            "/dev/sda3",
            "/dev/sda1"
        ]
    }
}
Please help!
Some notes on the playbook.
For localhost, connection: local is implicit, so there is no technical need to specify it.
set_fact works, but in this case I believe it is more appropriate to use "vars" at the play level instead of the set_fact module in a task. This also allows you to use "vars_files" for easier handling of disk device lists.
I would recommend keeping your task names simple, as you may want to use the --start-at-task option at some point.
with_items is deprecated in favor of "loop".
The conditional "when" works at the task level, but when it is combined with a loop, Ansible evaluates the condition separately for each item, so you can test each item directly in the condition (see the sketch below).
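For reference, here is a minimal sketch of such a per-item condition, using the Jinja2 "in" test (requires Jinja2 2.10 or newer); it runs the command only for device paths that contain one of the disk_list names:

- name: Run shell command on each matching XFS file system
  shell: ls -l {{ item }} | awk '{print $1}'
  loop: "{{ df_result.stdout_lines }}"
  # keep the disk_list entries that occur in the device path; run only if any match
  when: disk_list | select('in', item) | list | length > 0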
Here is a working version of what you need, which also reflects the five recommendations above:
---
- hosts: localhost
  gather_facts: false
  vars:
    disk_list:
      - sda
      - sdb
      - sdc
  tasks:
    - name: cleans previous runs
      shell: cat /dev/null > /tmp/df_result
    - name: creates temporary file
      shell: df -hT | grep xfs | grep -i {{ item }} | awk '{print $1}' >> /tmp/df_result
      loop: "{{ disk_list }}"
    - name: creates variable
      shell: cat /tmp/df_result
      register: df_result
    - name: shows info
      shell: ls -l {{ item }} | awk '{print $1}'
      loop: "{{ df_result.stdout_lines }}"
I basically just use a temporary file (/tmp/df_result) with the results you need, already filtered by the "grep -i {{ item }}" used in the loop of the "creates temporary file" task. The loop in "shows info" then just iterates over an already clean list of items.
To see the result on screen, you could use the "-v" option when running the playbook; if you want to save the result to a file, you could append " >> /tmp/df_final" to the shell line in the "shows info" task.
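Alternatively, you could register the output and print it with a debug task, so -v is not needed. A minimal sketch (the "prints info" task name and the ls_output variable are mine):

- name: shows info
  shell: ls -l {{ item }} | awk '{print $1}'
  loop: "{{ df_result.stdout_lines }}"
  register: ls_output            # capture the result of each iteration
- name: prints info
  debug:
    # each iteration is stored under .results; collect the stdout of each
    msg: "{{ ls_output.results | map(attribute='stdout') | list }}"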
I accidentally stumbled upon this post, which is kind of old. I'm sure you have already fixed this in your environment; maybe you found a better way to do it, hopefully you did.
Regards,
Below is my playbook, which has a variable running_processes that contains a list of PIDs (one or more).
Next, I read the user IDs for each of the PIDs. All good so far.
I then try to print the list of user IDs in the curr_user_ids variable using the debug module, which is when I get the error: 'dict object' has no attribute 'stdout_lines'
I was expecting curr_user_ids to contain one or more entries, as is evident from the output shared below.
- name: Get running processes list from remote host
  shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
  changed_when: false
  register: running_processes
- name: Gather USER IDs from processes id before killing.
  shell: "id -nu `cat /proc/{{ running_processes.stdout }}/loginuid`"
  register: curr_user_ids
  with_items: "{{ running_processes.stdout_lines }}"
- debug: msg="USER ID LIST HERE:{{ curr_user_ids.stdout }}"
  with_items: "{{ curr_user_ids.stdout_lines }}"
TASK [Get running processes list from remote host] **********************************************************************************************************
task path: /app/wls/startstop.yml:22
ok: [10.9.9.111] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "cmd": "ps -few | grep java | grep -v grep | awk '{print $2}'", "delta": "0:00:00.166049", "end": "2019-11-06 11:49:42.298603", "rc": 0, "start": "2019-11-06 11:49:42.132554", "stderr": "", "stderr_lines": [], "stdout": "24032", "stdout_lines": ["24032"]}
TASK [Gather USER IDS of processes id before killing.] ******************************************************************************************************
task path: /app/wls/startstop.yml:59
changed: [10.9.9.111] => (item=24032) => {"ansible_loop_var": "item", "changed": true, "cmd": "id -nu `cat /proc/24032/loginuid`", "delta": "0:00:00.116639", "end": "2019-11-06 11:46:41.205843", "item": "24032", "rc": 0, "start": "2019-11-06 11:46:41.089204", "stderr": "", "stderr_lines": [], "stdout": "user1", "stdout_lines": ["user1"]}
TASK [debug] ************************************************************************************************************************************************
task path: /app/wls/startstop.yml:68
fatal: [10.9.9.111]: FAILED! => {"msg": "'dict object' has no attribute 'stdout_lines'"}
Can you please suggest why I am getting this error and how I can resolve it?
A few points on why your solution didn't work:
The task Get running processes list from remote host returns a newline-separated (\n) string, so you will need to process this output and turn it into a proper list object first.
The task Gather USER IDs from processes id before killing. returns a dictionary containing the key results, whose value is of type list, so you will need to iterate over it and fetch the stdout value of each element.
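As a side note, the per-element stdout values can also be pulled out in a single expression with the map filter; a minimal sketch, assuming every iteration succeeded:

- debug:
    # collect the stdout of every loop iteration and drop duplicates
    msg: "{{ curr_user_ids.results | map(attribute='stdout') | list | unique }}"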
This is how I solved it.
---
- hosts: "localhost"
  gather_facts: true
  become: true
  tasks:
    - name: Set default values
      set_fact:
        process_ids: []
        user_names: []
    - name: Get running processes list from remote host
      shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
      changed_when: false
      register: running_processes
    - name: Register a list of Process ids (Split newline from output before)
      set_fact:
        process_ids: "{{ running_processes.stdout.split('\n') }}"
    - name: Gather USER IDs from processes id before killing.
      shell: "id -nu `cat /proc/{{ item }}/loginuid`"
      register: curr_user_ids
      with_items: "{{ process_ids }}"
    - name: Register a list of User names (Out of result from before)
      set_fact:
        user_names: "{{ (user_names + [item.stdout]) | unique }}"
      when: item.rc == 0
      with_items:
        - "{{ curr_user_ids.results }}"
    - name: Set unique entries in User names list
      set_fact:
        user_names: "{{ user_names | unique }}"
    - name: DEBUG
      debug:
        msg: "{{ user_names }}"
The variable curr_user_ids registers the results of each iteration:
register: curr_user_ids
with_items: "{{ running_processes.stdout_lines }}"
The list of the results is stored in
curr_user_ids.results
Take a look at the variable
- debug:
    var: curr_user_ids
and loop over the stdout_lines:
- debug:
    var: item.stdout_lines
  loop: "{{ curr_user_ids.results }}"
Ansible newbie here
Hopefully there is a simple solution to my problem
I'm trying to run SQL across a number of Oracle databases on one node. I generate a list of databases from ps -ef and use with_items to pass the dbname values.
My question is how do I display the output from each database running the select statement?
tasks:
  - name: Exa check | find db instances
    become: yes
    become_user: oracle
    shell: |
      ps -ef|grep pmon|grep -v grep|grep -v ASM|awk '{ print $8 }'|cut -d '_' -f3
    register: instance_list_output
    changed_when: false
    run_once: true
  - shell: |
      export ORAENV_ASK=NO; export ORACLE_SID={{ item }}; export ORACLE_HOME=/u01/app/oracle/database/12.1.0.2/dbhome_1; source /usr/local/bin/oraenv; $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" << EOSQL
      select * from v\$instance;
      EOSQL
    with_items:
      - "{{ instance_list_output.stdout_lines }}"
    register: sqloutput
    run_once: true
The loop below might work.
- debug:
msg: "{{ item.stdout }}"
loop: "{{ sqloutput.results }}"
If it does not, take a look at the content of the variable and decide how to use it:
- debug: var=sqloutput
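If the full registered variable turns out to be too noisy, here is a sketch that pairs each instance name with its SQL output (in a registered loop result, item.item holds the original loop item; loop_control's label merely shortens the console output):

- debug:
    msg: "{{ item.item }}: {{ item.stdout_lines }}"
  loop: "{{ sqloutput.results }}"
  loop_control:
    label: "{{ item.item }}"   # show only the instance name per iteration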
My Ansible Playbook:
#Tag --> B.6 -->
- name: Change the Security Realm to CustomRealm from ManagementRealm
  command: /jboss-as-7.1.1.Final/bin/jboss-cli.sh --connect --command="/core-service=management/management-interface=http-interface:read-attribute(name=security-realm)"
  register: Realm
- debug:
    msg: "{{ Realm.stdout_lines }}"
The output for the above command in the message is as follows:
ok: [342f2f7bed8e] => {
    "msg": [
        "{",
        "    \"outcome\" => \"success\",",
        "    \"result\" => \"ManagementRealm\"",
        "}"
    ]
}
Is there a way to extract just \"result\" => \"ManagementRealm\"?
I tried using
Realm.stdout_lines.find('result')
but that fails, and awk & grep commands don't seem to work here.
Any thoughts are greatly appreciated.
Thank you
I think there are a few ways you could handle this.
1) Grep the output before it gets to Ansible:
# Note the change of 'command' to 'shell'
- name: Change the Security Realm to CustomRealm from ManagementRealm
  shell: /jboss-as-7.1.1.Final/bin/jboss-cli.sh --connect --command="/core-service=management/management-interface=http-interface:read-attribute(name=security-realm)" | grep -o 'result.*'
  register: Realm
2) If the output from the source script is always 4 lines long, you can just grab the 3rd line:
# List indexes start at 0
- debug:
    msg: "{{ Realm.stdout_lines[2] | regex_replace('^ *(.*$)', '\\1') }}"
3) The nicest way if you have an option to modify jboss-cli.sh, would be to get the jboss-cli.sh to output valid JSON which can then be parsed by Ansible:
# Assuming jboss-cli.sh produces {"outcome": "success", "result": "ManagementRealm"}
- set_fact:
    jboss_data: "{{ Realm.stdout | from_json }}"
- debug:
    var: jboss_data.result
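4) A filter-only variation on option 2 that does not depend on the line position; a sketch assuming exactly one line of the output contains the word "result":

- debug:
    # keep only the lines matching 'result', then take the first one
    msg: "{{ Realm.stdout_lines | select('search', 'result') | list | first }}"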
I have a playbook below:
- hosts: localhost
  vars:
    folderpath:
      folder1/des
      folder2/sdf
  tasks:
    - name: Create a symlink
      shell: "echo {{folderpath}} | awk -F'/' '{system(\"mkdir \" $1$2 );}'"
      register: result
      #- debug:
      #    msg: "{{ result.stdout }}"
      with_items:
        - " {{folderpath}} "
However, when I run the playbook I get 2 folders made. The first one is:
1- folder1des (as expected)
2- folder2 (this should ideally be folder2sdf)
I have tried many combinations and it still doesn't want to work. What do I need to make it work properly?
I do not have an Ansible environment at the moment, but the following should work:
- hosts: localhost
  tasks:
    - name: Create a symlink
      shell: "echo {{item}} | awk -F'/' '{system(\"mkdir \" $1$2 );}'"
      register: result
      #- debug:
      #    msg: "{{ result.stdout }}"
      with_items:
        - folder1/des
        - folder2/sdf
Reference: Ansible Loops Example
Explanation:
You were passing a single object to with_items, so it found only one item to iterate over and therefore ran only once. I have instead passed a list of items to with_items so that it can iterate over each entry.
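If you would rather keep the variable, the root cause can also be fixed by declaring folderpath as a proper YAML list (note the leading dashes), so with_items receives two items instead of one multi-line string; a minimal sketch:

- hosts: localhost
  vars:
    folderpath:
      - folder1/des   # the dashes make this a list of two strings
      - folder2/sdf
  tasks:
    - name: Create directories from folderpath
      shell: "echo {{ item }} | awk -F'/' '{system(\"mkdir \" $1$2 );}'"
      with_items: "{{ folderpath }}"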
Hope this helps!
Maybe
- hosts: localhost
  vars:
    folderpath:
      - folder1/des
      - folder2/sdf
  tasks:
    - name: Create a symlink
      file:
        state: link
        path: "{{ item | regex_replace('[0-9]/','_') }}"
        src: "{{ item }}"
      with_items: "{{ folderpath }}"
Nothing in your given code creates symlinks. Is that really what you meant to do?
I am trying to write a line to a file using lineinfile.
The name of the file is to be passed to the playbook at run time by the user as a command line argument.
Here is what the task looks like:
# Check for timezone.
- name: check timezone
  tags: timezoneCheck
  register: timezoneCheckOut
  shell: timedatectl | grep -i "Time Zone" | awk --field-separator=":" '{print $2}' | awk --field-separator=" " '{print $1}'
- lineinfile:
    path: {{ output }}
    line: "Did not find { DesiredTimeZone }"
    create: True
    state: present
    insertafter: EOF
  when: timezoneCheckOut.stdout != DesiredTimezone
- debug: var=timezoneCheckOut.stdout
My questions are:
1. How do I specify the command-line argument to be the destination file to write to (path)?
2. How do I append the DesiredTimeZone variable (specified in an external variables file) to the line argument?
My answer below might not be your exact solution, but it covers both questions.
How to specify the output file as a command-line argument:
ansible-playbook yourplaybook.yml -e output=/path/to/outputfile
How to include the DesiredTimeZone variable from an external file:
vars_files:
  - external.yml
Full playbook.yml for testing locally:
yourplaybook.yml
- name: For testing
  hosts: localhost
  vars_files:
    - external.yml
  tasks:
    - name: check timezone
      tags: timezoneCheck
      register: timezoneCheckOut
      shell: timedatectl | grep -i "Time Zone" | awk -F":" '{print $2}' | awk --field-separator=" " '{print $1}'
    - debug: var=timezoneCheckOut.stdout
    - lineinfile:
        path: "{{ output }}"
        line: "Did not find {{ DesiredTimeZone }}"
        create: True
        state: present
        insertafter: EOF
      when: timezoneCheckOut.stdout != DesiredTimeZone
external.yml (place it at the same level as yourplaybook.yml)
---
DesiredTimeZone: "Asia/Tokyo"
With Ansible you should define the desired state. Period.
The correct way of doing this is to just use the timezone module:
- name: set timezone
  timezone:
    name: "{{ DesiredTimeZone }}"
No need to jump through hoops with shell, register, compare, print...
If you want to put the system into the desired state, just run the playbook:
ansible-playbook -e DesiredTimeZone=Asia/Tokyo timezone_playbook.yml
Ansible will ensure that all hosts in question have the DesiredTimeZone.
If you just want to check whether your system complies with the desired state, use the --check switch:
ansible-playbook -e DesiredTimeZone=Asia/Tokyo --check timezone_playbook.yml
In this case Ansible will just log what would have to change for the current state to become the desired state, without making any actual changes.