Ansible get column from stdout - ansible

I'm new to Ansible and thus this question may seem silly to more advanced users.
Anyway, I need to get the value 362496 for the column LDFree.
I know I can use the shell module with pipes and awk, but I was wondering if it's possible to achieve it in Ansible using some sort of "filter" on STDOUT?
This is the STDOUT from the CLI:
-------------------------(MB)-------------------------
CPG ---EstFree---- -------Usr------- ---Snp---- ---Adm---- -Capacity Efficiency-
Name RawFree LDFree Total Used Total Used Total Used Compaction Dedup
SSD_r6 483328 362496 12693504 12666880 12288 2048 8192 1024 1.0 -

You can do this, since Ansible/Jinja supports calling methods of native types:
- command: cat test.txt
  register: cmd_res
- debug:
    msg: "{{ cmd_res.stdout_lines[3].split()[2] }}"
stdout_lines[3] takes the fourth line, .split() splits it into tokens, and [2] takes the third token.
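The same indexing can be checked outside Ansible. A small Python sketch using the sample output from the question (stdout_lines in Ansible is just the command output split into lines):

```python
# Sample CLI output from the question (four lines; the data row is last).
stdout = """\
-------------------------(MB)-------------------------
CPG ---EstFree---- -------Usr------- ---Snp---- ---Adm---- -Capacity Efficiency-
Name RawFree LDFree Total Used Total Used Total Used Compaction Dedup
SSD_r6 483328 362496 12693504 12666880 12288 2048 8192 1024 1.0 -"""

stdout_lines = stdout.splitlines()   # what a registered result's stdout_lines holds
ldfree = stdout_lines[3].split()[2]  # fourth line, third whitespace-separated token
print(ldfree)                        # → 362496
```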

Related

How to run 1 playbook for the same group by multiple plays aka threaded

Our current setup has ~2000 servers (in one group).
I would like to know if there is a way to run x.yml on the whole group (where all 2k servers are) but in multiple parallel plays (threaded, or something), e.g.:
ansible-playbook -i prod.ini -l my_group[50%] x.yml
ansible-playbook -i prod.ini -l my_group[other 50%] x.yml
solutions with awx or ansible-tower are not relevant.
Using even 500-1000 forks didn't give any improvement.
Try combining forks and the free strategy.
the default behavior of Ansible is:
Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks.
So even if you increase the number of forks, tasks will still wait for every host to finish before moving ahead. The free strategy allows each host to run until the end of the play as fast as it can:
- hosts: all
  strategy: free
  tasks:
    # ...
ansible-playbook -i prod.ini -f 500 -l my_group x.yml
As mentioned above, you should preferably increase forks and set the strategy to free. Increasing forks helps you run the playbook on more servers in parallel, and setting the strategy to free lets each server run through its tasks independently, without waiting for the others.
Please refer to the doc below for more clarification.
docs
Resolved by using the patterns my_group[:1000] and my_group[999:].
Forks didn't give any time decrease in my case.
Also, the free strategy actually multiplied the run time, which was pretty weird.
And debugging the free-strategy summary is really difficult when you have 2k servers and about 50 tasks in the playbook.
thanks everyone for sharing
much appreciated

AnsibleUnsafeText enhancement to an existing playbook

I have a playbook that includes a set of tasks to obtain the status of an IBM MQ Channel. The playbook is passed the Channel name and I run the runmqsc command on each server
and register the output, which unfortunately spans many lines, so I need to get it onto one line:
echo "DISPLAY CHS('{{CHANNEL_NAME}}') STATUS RQMNAME"| runmqsc {{QMGR}}|grep -v DISPLAY|sed 's/^[^ ].*$/%/g' | tr -s " " | tr -d "\n" | tr "%" "\n"|grep CHANNEL|sed 's/ CURRENT//g' | sed 's/^ //g'|sed 's/ *$//g'
which gave the output
CHANNEL(CHANNEL) CHLTYPE(CLUSSDR) CONNAME(1.2.3.4(1414)) RQMNAME(QMGR) STATUS(RUNNING) SUBSTATE(MQGET) XMITQ(XMIT_QUEUE)
I parsed this to create a list, CHSstatus (when CHANNEL_NAME is a wildcard I can have more than one result). From this list I then set variables based on the STATUS of each channel.
- set_fact:
    allStopped: "{%if splitStatus.STATUS == 'STOPPED'%}{{allStopped + [splitStatus.CHANNEL]}}{%else%}{{allStopped}}{%endif%}"
    allRunning: "{%if splitStatus.STATUS == 'RUNNING'%}{{allRunning + [splitStatus.CHANNEL]}}{%else%}{{allRunning}}{%endif%}"
    mixedState: "{%if splitStatus.STATUS != 'STOPPED' and splitStatus.STATUS != 'RUNNING'%}{{mixedState + [splitStatus.CHANNEL]}}{%else%}{{mixedState}}{%endif%}"
  with_items: "{{CHSstatus}}"
  loop_control:
    loop_var: splitStatus
This has been working fine for a while with no issues. The problem is that you actually need to know the channel name, and as I wanted to enhance it to get the status of all the channels in a CLUSTER, I put a 'wrapper' task in front of it to collect the channel names and pass them in.
My playbook now calls the MQ command to get these channel names (we pass the CLUSTER name and register the result in clusterChannels), and includes the previously working task for each channel name:
- name: Status of Cluster channels
  include: channelstatus.yml CHANNEL_NAME={{item}}
  with_items: "{{clusterChannels.stdout_lines}}"
This now fails when trying to access the CHSstatus list; the error is ERROR! Unexpected Exception: unhashable type: 'dict'
I ran both versions and compared some variables for each call of the original tasks.
Original
"CHSstatus = [{'STATUS': u'RUNNING', 'CHANNEL': u'CHANNEL1'}]",
"CHSstatus Type = list",
"CHANNEL_NAME = CHANNEL1",
"CHANNEL_NAME type = AnsibleUnicode"
with the wrapper
"CHSstatus = [{'STATUS': u'RUNNING', 'CHANNEL': u'CHANNEL1'}]",
"CHSstatus Type = list",
"CHANNEL_NAME = CHANNEL1",
"CHANNEL_NAME type = AnsibleUnsafeText"
I tried to find information about what an AnsibleUnsafeText variable is, and found:
'Since you are registering the result of a command in a variable, Ansible can't know what will be the content which becomes delivered. Therefore the registered Text output is marked as Unsafe.'
This has confused me a bit, as I am running two commands: one to get the list of channels in the wrapper, and another to get the status of each individual channel in the original, unchanged code. The registered variable is of type list, but when I pass each channel from the wrapper it becomes an AnsibleUnsafeText type. I noticed that the registered variables for both commands, when run with with_items, do indeed have the AnsibleUnsafeText type.
Can I convert this in any way? I have seen answers on how to convert to int, and I have tried item|string, but this did not work either.
My original playbook used roles. I created a cut-down version using tasks only, including a cut-down channelstatus.yml, and this worked OK; I then converted it back to a role, including the same file, and it failed.
One thing I noticed: in both cases the registered result of my MQ command gets parsed successfully, but the error occurs when I try to use the resulting list of dicts.
The example below is a list containing a single dict; at run time it can be a list of multiple dicts.
CHSstatus = [{'STATUS': u'RUNNING', 'CHANNEL': u'CHANNEL1'}]
and a simple debug command is enough for this error to fail the playbook...
- debug:
    msg:
      - "{{item.CHANNEL}}"
      - "{{item.STATUS}}"
  with_items: "{{CHSstatus}}"
ERROR! Unexpected Exception: unhashable type: 'dict'
however
- debug:
    msg:
      - "{{CHSstatus[0].CHANNEL}}"
      - "{{CHSstatus[0].STATUS}}"
works fine, which really does not make sense
Any help appreciated

How do I check a successful retry with a separate command in Ansible?

Ansible 2.9.6
There is a standard way to use retries in Ansible:
- name: run my command
  register: result
  retries: 5
  delay: 60
  until: result.rc == 0
  shell:
    cmd: >
      mycommand.sh
until is a passive check here.
How can I do the check with a separate command? Something like "retry command A several times until command B returns 0".
Of course I could put both commands inside one shell execution, "commandA ; commandB", and get the exit status of the second one as result.rc. But is there an Ansible way to do this?
The Ansible way would point towards retries over a block or include that contains a command task for each script. However, that's not supported. You can use a modified version of the workaround described here, but it starts to get complicated, so you may prefer to take Zeitounator's suggestion.
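The "commandA ; commandB" workaround boils down to a loop like this (a Python sketch, not Ansible; the function name and placeholder commands are my own):

```python
import subprocess
import time

def retry_until_check_passes(action_cmd, check_cmd, retries=5, delay=60):
    """Run action_cmd, then run check_cmd; stop as soon as check_cmd exits 0.

    Mirrors Ansible's retries/delay/until semantics, but with a separate
    check command instead of inspecting the action's own return code.
    """
    for _ in range(retries):
        subprocess.run(action_cmd, shell=True)
        if subprocess.run(check_cmd, shell=True).returncode == 0:
            return True
        time.sleep(delay)
    return False
```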

Use ansible for manual staged rollout using `serial` and unknown inventory size

Consider an Ansible inventory with an unknown number of servers in a nodes key.
The script I'm writing should be usable with different inventories that should be as simple as possible and are out of my control, so I don't know the number of nodes ahead of time.
My command to run the playbook is pretty vanilla and I can freely change it. There could be two separate commands for both rollout stages.
ansible-playbook -i $INVENTORY_PATH playbooks/example.yml
And the playbook is pretty standard as well and can be adjusted:
- hosts: nodes
  vars:
    ...
  remote_user: '{{ sudo_user }}'
  gather_facts: no
  tasks:
    ...
How would I go about implementing a staged execution without changing the inventory?
I'd like to run one command to execute the playbook for 50% of the inventory first. Here the result needs to be checked manually by a human. Then I'd like to use another command to execute the playbook for the other half. The author of the inventory should not have to worry about this. All machines below the nodes key are the same.
I've looked into the serial keyword, but it doesn't seem like I could automatically end execution after one batch and then later come back to continue with the second half.
Maybe something creative could be done with variables passed to ansible-playbook? I'm just wondering, shouldn't this be a common use-case? Are all staged rollouts supposed to be fully automated?
Without even using serial, here is a possible very simple scenario.
First, get a calculation of $half of the inventory by inspecting the inventory itself. The following enables the json callback plugin for the ad hoc command and makes sure it is the only plugin enabled. It also uses jq to parse the result. You can adapt this to any other JSON parser (or even use the yaml callback with a YAML parser if you prefer). Anyway, adapt to your own needs.
half=$( \
ANSIBLE_LOAD_CALLBACK_PLUGINS=1 \
ANSIBLE_STDOUT_CALLBACK=json \
ANSIBLE_CALLBACK_WHITELIST=json \
ansible localhost -i yourinventory.yml -m debug -a "msg={{ (groups['nodes'] | length / 2) | round(0, 'ceil') | int }}" \
| jq -r ".plays[0].tasks[0].hosts.localhost.msg" \
)
Then launch your playbook limited to the first $half nodes with whatever vars are needed for the human check, and launch it again for the remaining nodes without the check.
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[0:$(($half-1))] -e human_check=true
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[$half:] -e human_check=false
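The index arithmetic can be sanity-checked outside Ansible (a Python sketch; `n = 7` is a made-up odd group size, standing in for `groups['nodes'] | length`):

```python
from math import ceil

n = 7                      # hypothetical group size
half = ceil(n / 2)         # what round(0, 'ceil') | int computes above → 4

hosts = list(range(n))     # stand-ins for the hosts in 'nodes'
first_batch = hosts[:half]   # Ansible pattern nodes[0:half-1] (pattern ranges are inclusive)
second_batch = hosts[half:]  # Ansible pattern nodes[half:]
print(first_batch, second_batch)   # → [0, 1, 2, 3] [4, 5, 6]
```

The `$(($half-1))` in the shell command exists precisely because Ansible pattern ranges include their upper bound, unlike Python slices.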

SaltStack: edit yaml file on minion host based on salt pillar data

Say the minion host has a default YAML configuration named myconf.yaml. What I want to do is edit parts of those YAML entries using values from a pillar. I can't even begin to think how to do this in Salt. The only thing I can think of is to run a custom Python script on the host via cmd.run and feed it input via arguments, but this seems overcomplicated.
I want to avoid file.managed. I cannot use a template, since the .yaml file is big and can change by external means. I just want to edit a few parameters in it. I suppose a Python script could do it, but I thought Salt could do it without writing software.
I have found salt.states.file.serialize with the merge_if_exists option; I will try this and report back.
You want file.serialize with the merge_if_exists option.
# states/my_app.sls
something_conf_file:
  file.serialize:
    - name: /etc/my_app.yaml
    - dataset_pillar: my_app:mergeconf
    - formatter: yaml
    - merge_if_exists: true

# pillar/my_app.sls
my_app:
  mergeconf:
    options:
      opt3: 100
      opt4: 200
On the target, /etc/my_app.yaml might start out looking like this (before the state is applied):
# /etc/my_app.yaml
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 3
  opt4: 4
And would look like this after the state is applied:
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 100
  opt4: 200
As far as I can tell this uses the same algorithm as pillar merges, so e.g. you can merge or partially overwrite dictionaries, but not lists; lists can only be replaced whole.
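That merge/replace behavior can be illustrated with a small Python sketch (my own approximation of the merge semantics described above, not Salt's actual implementation):

```python
def merge(base, update):
    """Recursively merge dicts; any non-dict value (including lists) is replaced whole."""
    out = dict(base)
    for key, val in update.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], val)   # dicts merge key by key
        else:
            out[key] = val                    # scalars and lists are overwritten
    return out

# The example file and pillar from above.
existing = {"creds": {"user": "a", "pass": "b"},
            "options": {"opt1": 1, "opt2": 2, "opt3": 3, "opt4": 4}}
pillar = {"options": {"opt3": 100, "opt4": 200}}

print(merge(existing, pillar))
```

Note how `creds` and the untouched `opt1`/`opt2` survive, while a list-valued key would simply be replaced by the pillar's version.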
This can be done for both json and yaml with file.serialize. Input can be inline on the state or come from a pillar. A short excerpt follows:
state:

cassandra_yaml:
  file:
    - serialize
    # - dataset:
    #     concurrent_reads: 8
    - dataset_pillar: cassandra_yaml
    - name: /etc/cassandra/conf/cassandra.yaml
    - formatter: yaml
    - merge_if_exists: True
    - require:
      - pkg: cassandra-pkgs

pillar:

cassandra_yaml:
  concurrent_reads: "8"
