How to register just the major version of an OpenShift cluster? - ansible

- name: get ocp version
  shell: "oc get clusterversion | awk '{print $2}'| tail -1"
  register: ver
I have used the above task to register the output, but ver will contain the patch version as well - 4.8.1. I need only the major version, i.e., 4.8.

Q: "I need only major version i.e., 4.8"
A: Given the registered variable
ver.stdout: 4.8.1
the simplest option is to use the filter splitext. For example,
ver_major: "{{ ver.stdout|splitext|first }}"
gives
ver_major: '4.8'
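An equivalent expression without splitext, if you prefer plain string splitting (a sketch, not from the original answer):
ver_major: "{{ ver.stdout.split('.')[:2] | join('.') }}"
which gives the same '4.8'.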

Note that in the next example the command is not wrapped in double quotes.
- name: get ocp version
  shell: oc get clusterversion | awk 'END{split($2,a,".");print a[1] "." a[2]}'
  register: ver

An option with grep
oc get clusterversion --no-headers | grep -o '[4].[0-9]' | head -1
or, to query the version directly, an option with cut
oc get clusterversion -o jsonpath='{.items[].spec.desiredUpdate.version}{"\n"}' | cut -d '.' -f -2
both of which will result in
4.8
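The cut pipeline can also be wrapped into a registering task in the same style as the channel task below; a sketch reusing the command above:
- name: Get OCP major.minor version
  shell:
    cmd: oc get clusterversion -o jsonpath='{.items[].spec.desiredUpdate.version}{"\n"}' | cut -d '.' -f -2
  register: ver
  changed_when: false
  check_mode: false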
If you are interested in the specs, or the upgrade channels and releases, of your OpenShift cluster distribution:
- name: OC get upgrade channel
  shell:
    cmd: oc get clusterversion -o jsonpath='{.items[].spec.channel}{"\n"}'
  # Since this is a reporting task ...
  changed_when: false
  check_mode: false
  register: ver
which would result in
stable-4.8
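If the major version from the question is then wanted from this registered channel, the prefix could be stripped with regex_replace; a sketch assuming the usual <channel>-<major.minor> naming such as stable-4.8:
ver_major: "{{ ver.stdout | regex_replace('^.*-', '') }}"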
More interesting information ...
oc get clusterversion -o jsonpath='{.items[]}{"\n"}' | jq .
Further Documentation
Gathering data about your cluster
OpenShift Admins Guide to jsonpath


stdout issue on Generate Thread dump playbook

I want to create thread dump files on a target host using Ansible. For this purpose I need to get the PID and UID of the process I need to generate the thread dump for.
I tried running the following lines, but instead of getting one value (the PID) I get two values:
one - the PID I need (2229) - and a second one, which I think is the PID of the Ansible process running remotely.
I get: "pid.stdout": "2229\n2456601"
It is actually the value 2229 that I need to store as pid.
- name: Generate treaddump.
  hosts: all
  gather_facts: true
  tasks:
    - name: get PID.
      shell: "ps aux | grep -i jira | pgrep -i java | awk -F '[ ]*' '{print $2}'"
      register: pid
    - debug:
        var: pid.stdout
    - name: get User.
      shell: "ps aux | grep -i jira | grep -i java | awk -F '[ ]*' '{print $1}'"
      register: user
Any idea how to get only the PID? What am I doing wrong? Thank you.
I tried to run the playbook as a command - I got the same output.
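One likely cause: pgrep never reads the piped ps output; it always scans the full process table, so every java PID on the host comes back, not just Jira's. A sketch of one way around that, keeping the filtering in grep (the exact match pattern is an assumption):
- name: get PID of the Jira java process
  # grep -v grep drops the pipeline's own grep commands from the match
  shell: "ps aux | grep -i jira | grep -i java | grep -v grep | awk '{print $2}'"
  register: pid
  changed_when: false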

Ansible find if the server has websphere running on it or not

I'm in need of help. I currently have to find out whether a server has WebSphere running on it or not. I can do it through ps -ef | grep websphere. The problem I'm facing is that we can only use the "raw" module, as other modules won't run on the old Linux OS. I'm thinking of using the code below, but I'm not sure how to take its output and write it to a file that lists the server name and 0 or 1, where 0 is false and 1 is true.
---
- name: To find the websphere servers
  hosts: websphere
  tasks:
    - name:
      raw: "if [[ $(ps aux | grep cron | grep -vc grep) > 0 ]] ; then echo 1; else echo 0 ; fi"
Please try the below approach.
- name: Getting process IDs of the process
  pids:
    name: cron
  register: cron_pids
- name: Printing the process IDs obtained
  debug:
    msg: "Process IDs of cron: {{ cron_pids.pids | join(',') }}"
(OR)
Alternatively, you can use the shell module and retrieve the info from the registered variable:
- name: Get running processes list from remote host
  shell: "ps -aux --no-headers | grep cron | awk '{print $2}'"
  register: running_processes
- debug:
    msg: "{{ running_processes }}"

Ansible module lineinfile with variable path

I need to use the Ansible lineinfile module in such a way that it operates on a variable path. (This is for Ansible 2.5.2.) In this example the filename should depend on the version of PostgreSQL that is actually installed on a remote host (instead of a hardwired version 9.6):
- lineinfile:
    path: /etc/postgresql/9.6/main/postgresql.conf
    regexp: '^#?\s*log_connections\s*='
    line: 'log_connections = on'
    state: present
In bash I would use e.g. this expression for obtaining the version and the path:
/etc/postgresq/$(pg_lsclusters -h | awk '{print $1}' | head -n 1)/main/postgresql.conf
It apparently does not work verbatim as the path parameter to Ansible's lineinfile module:
FAILED! => {"changed": false, "msg": "Destination
/etc/postgresq/$(pg_lsclusters -h | awk '{print $1}' | head -n
1)/main/postgresql.conf does not exist !", "rc": 257}
So my question is this: How can I form a variable path with Ansible in this use case?
This seems to work fine:
- name: Got it!
  command: bash -c "pg_lsclusters -h | awk '{print $1; exit}'"
  register: version
- set_fact: version='{{ version.stdout }}'
- lineinfile:
    path: "/etc/postgresql/{{ version }}/main/postgresql.conf"
    regexp: '^#?\s*log_connections\s*='
    line: 'log_connections = on'
    state: present
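The same idea can also be written with the shell module, which handles the pipe itself and avoids overwriting the registered variable via set_fact; a minimal sketch, assuming pg_lsclusters is available on the remote host:
- name: Detect the installed PostgreSQL cluster version
  shell: "pg_lsclusters -h | awk '{print $1; exit}'"
  register: pg_version
  changed_when: false

- lineinfile:
    path: "/etc/postgresql/{{ pg_version.stdout }}/main/postgresql.conf"
    regexp: '^#?\s*log_connections\s*='
    line: 'log_connections = on'
    state: present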

Ansible error on shell command returning zero

Ansible doesn't seem to be able to handle a result of '0' from shell commands. This task
- name: Check if swap exists
  shell: "swapon -s | grep -ci dev"
  register: swap_exists
returns an error:
"msg": "non-zero return code"
But when I replace "dev" with "type", which actually always occurs and gives a count of at least 1, then the command is successful and no error is thrown.
I also tried with command: instead of shell: - it doesn't give an error, but then the command is also not executed.
Since you want to run a sequence of commands that involves a pipe, Ansible's documentation says you should use shell and not command, as you are doing.
So the problem is that grep returns 1 (it didn't find a match in the swapon output), and Ansible considers this a failure. Since you are sure there is no real issue, just add ignore_errors: true and be done with it.
- name: Check if swap exists
  shell: "swapon -s | grep -ci non_existent_string"
  register: swap_exists
  ignore_errors: true
OR:
If you want to narrow it down to return codes 0 and 1, instruct Ansible not to treat those two return codes as failures:
- name: Check if swap exists
  shell: "swapon -s | grep -ci non_existent_string"
  register: swap_exists
  # ignore_errors: true
  failed_when: swap_exists.rc != 1 and swap_exists.rc != 0
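grep exits with 0 when it matches, 1 when it does not, and 2 on a real error, so the same intent can also be expressed with a single comparison; an equivalent variant, not from the original answer:
- name: Check if swap exists
  shell: "swapon -s | grep -ci dev"
  register: swap_exists
  failed_when: swap_exists.rc > 1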
I found a better way. If you only need to know the number of matching records, this works:
- name: Check if swap exists
  shell: "swapon -s | grep -i dev | wc -l"
  register: swap_exists
Another way is to always use cat at the end of the pipe. See Ansible shell module returns error when grep results are empty
- name: Check if swap exists
  shell: "swapon -s | grep -i dev | cat"
  register: swap_exists
You can also pipe the grep count through awk and return your own custom output. This avoids the need for ignore_errors.
- name: Check if swap exists
  shell: |
    swapon -s | grep -ci dev | awk '{ r = $0 == 0 ? "false" : "true"; print r }'
  register: swap_exists
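Whichever variant you pick, the registered value can then gate later tasks; a minimal sketch of consuming the true/false output from the awk variant above (the follow-up task itself is hypothetical):
- name: Report that swap is already configured
  debug:
    msg: "Swap is already configured on {{ inventory_hostname }}"
  when: swap_exists.stdout | trim == "true"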

Ansible adhoc command in sequence

I want to run an Ansible ad hoc command on a list of EC2 instances. I want Ansible to run it in sequence, but Ansible runs them in random order. For example:
13:42:21 #cnayak ansible :► ansible aws -a "hostname"
ec2 | SUCCESS | rc=0 >>
ip-172-31-36-255
ec3 | SUCCESS | rc=0 >>
ip-172-31-45-174
13:42:26 #cnayak ansible :► ansible aws -a "hostname"
ec3 | SUCCESS | rc=0 >>
ip-172-31-45-174
ec2 | SUCCESS | rc=0 >>
ip-172-31-36-255
Any way to make them run in order?
By default Ansible runs tasks in parallel across hosts. If you want them to be executed serially, you can limit the number of workers running at the same time by using the "--forks" option.
Adding "--forks 1" to your ansible invocation should run your command sequentially on all hosts (in the order defined by the inventory).
You can use --forks with an ad hoc command, and serial: 1 inside a playbook.
For the ad hoc command:
ansible aws -a "hostname" --forks=1
Inside the playbook:
- hosts: aws
  become: yes
  gather_facts: yes
  serial: 1
  tasks:
    - YOUR TASKS HERE
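The play-level order keyword can be combined with serial to make the host ordering explicit as well; a minimal sketch, assuming Ansible 2.4 or later where order is available:
- hosts: aws
  gather_facts: no
  serial: 1
  order: sorted        # or 'inventory' to follow the inventory file order
  tasks:
    - command: hostname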
--forks=1 hasn't been sorting the inventory for me in recent versions of Ansible (2.7).
Another approach I find useful is using the "oneline" output callback, so I can use the standard sort and grep tools on the output:
ANSIBLE_LOAD_CALLBACK_PLUGINS=1 \
ANSIBLE_STDOUT_CALLBACK=oneline \
ansible aws \
  -m debug -a "msg={{ansible_host}}\t{{inventory_hostname}}" \
  | sort \
  | grep -oP '"msg": \K"[^"]*"' \
  | xargs -n 1 echo -e
This has been useful for quick-n-dirty reports on arbitrary vars or (oneline) shell command outputs.
