How can I run nohup command in ansible playbook task? - ansible

I am pretty much a beginner with Ansible. I am trying to migrate Linux scripts that perform a remote server startup using the nohup command, and it does not work properly. The Ansible results state changed: true, but when I check whether the process is up, it isn't, so I have to start it manually.
Thanks for your help.
Here is what I am doing:
- name: "Performing remote startup on server {{ SERVER }}"
command: nohup {{ JAVA_HOME }}/bin/java {{ DYNATRACE_AGENT_REUS }} -jar -Dengine={{ ENGINE_NAME }} name-{{ ENGINE_JAR_VERSION }}.jar --server.port={{ ENGINE_PORT }} >{{ CONSOLE_LOG }} 2>&1 &
args:
chdir: "{{ SOMEVAR_HOME }}"
async: 75
poll: 0
register: out
- debug:
msg: '{{ out }}'
Results:
TASK [Performing remote startup on server] ***********************************************************************************************************************************************************************************
changed: [servername]
TASK [debug] ********************************************************************************************************************************************************************************************************************************
ok: [servername] => {
    "msg": {
        "ansible_job_id": "160551828781.28405",
        "changed": true,
        "failed": false,
        "finished": 0,
        "results_file": "/home/admindirectory/.ansible_async/160551828781.28405",
        "started": 1
    }
}

Try using the shell module instead of command. Shell redirects (> and 2>&1) and the trailing & do not work with the command module, because it does not pass your command through a shell.
P.S. You are using Ansible in a strange way here. Create a systemd unit (system- or user-level) for the Java process and use the systemd module to enable/start it.
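For completeness, a minimal sketch of the shell-based variant, reusing the variables from the question (assumed to be defined elsewhere); with async and poll: 0 Ansible already detaches the job, so the nohup and trailing & are mostly belt-and-braces:
- name: "Performing remote startup on server {{ SERVER }}"
  # shell (not command) so that the redirect and the trailing & are interpreted
  shell: >
    nohup {{ JAVA_HOME }}/bin/java {{ DYNATRACE_AGENT_REUS }}
    -jar -Dengine={{ ENGINE_NAME }} name-{{ ENGINE_JAR_VERSION }}.jar
    --server.port={{ ENGINE_PORT }} > {{ CONSOLE_LOG }} 2>&1 &
  args:
    chdir: "{{ SOMEVAR_HOME }}"
  async: 75
  poll: 0
  register: out
If the process needs to survive reboots or be restarted cleanly, the systemd-unit approach from the P.S. above is still the better long-term option.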

Related

Unable to grab success or failure i.e rc of long running Ansible task in a loop

Below is my code, which takes about 20 minutes in all. The task takes about 1 minute and the loop runs 20 times, so 20 x 1 = 20 minutes. I want the playbook to fail if any iteration of the shell task fails, i.e. rc != 0.
- name: Check DB connection with DBPING
  shell: "java utils.dbping ORACLE_THIN {{ item | trim }}"
  with_items: "{{ db_conn_dets }}"
  delegate_to: localhost
In order to speed up the tasks, I decided to run them in parallel and then grab the success or failure of each task, like below.
- name: Check DB connection with DBPING
  shell: "java utils.dbping ORACLE_THIN {{ item | trim }}"
  with_items: "{{ db_conn_dets }}"
  delegate_to: localhost
  async: 600
  poll: 0
  register: dbcheck

- name: Result check
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: _jobs
  until: _jobs.finished
  delay: 10
  retries: 20
  with_items: "{{ dbcheck.results }}"
Now the task Check DB connection with DBPING completes in less than a minute for the entire loop. However, the problem is that every Result check iteration fails, even though Check DB connection with DBPING would eventually succeed, i.e. rc=0.
Below is one sample failing Result check task, when it should have been successful:
failed: [remotehost] (item={'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_no_log': False, u'ansible_job_id': u'234197058177.18294', '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/home/ansbladm/.ansible_async/234197058177.18294', 'item': u'dbuser mypass dbhost:1521:mysid', '_ansible_ignore_errors': None}) => {
"ansible_job_id": "234197058177.18294",
"attempts": 1,
"changed": false,
"finished": 1,
"invocation": {
"module_args": {
"jid": "234197058177.18294",
"mode": "status"
}
},
"item": {
"ansible_job_id": "234197058177.18294",
"changed": true,
"failed": false,
"finished": 0,
"item": "dbuser mypass dbhost:1521:mysid",
"results_file": "/home/ansbladm/.ansible_async/234197058177.18294",
"started": 1
},
"msg": "could not find job",
"started": 1
}
Can you please let me know what the issue with my Result check task is, and how I can make sure it keeps trying until Check DB connection with DBPING completes, and then reports success or failure based on the rc of the shell module?
It would also be great if I could get the debug module to print all failed Check DB connection with DBPING tasks.
Kindly suggest.
Although this is only a partial answer to the set of queries I had (I will accept a complete answer if one comes in the future): adding delegate_to: localhost to the Result check task resolves the behavior, since the async job status files live on the host the original task was delegated to, not on the remote host.
I'm however still waiting for code that shows all failed tasks in the debug.
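For reference, a sketch of the corrected check plus a debug of the failures; the delegate_to line is the fix described above, while ignore_errors, the filter chain and the final fail task are my own additions, meant to keep the play running long enough to report every failed item:
- name: Result check
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: _jobs
  until: _jobs.finished
  delay: 10
  retries: 20
  with_items: "{{ dbcheck.results }}"
  delegate_to: localhost   # async status files live on the delegated host
  ignore_errors: yes       # keep looping so every failure can be reported

- name: Show failed DBPING checks
  debug:
    msg: "{{ _jobs.results | selectattr('failed', 'defined') | selectattr('failed') | map(attribute='item.item') | list }}"

- name: Fail the play if any DBPING check failed
  fail:
    msg: "One or more DB connection checks returned rc != 0"
  when: _jobs.results | selectattr('failed', 'defined') | selectattr('failed') | list | length > 0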

Ansible: pvchange with shell does not get executed

I'm not able to execute the shell command pvchange -u /dev/sde with ansible.
Can you please help me figure out what's wrong?
vars:
  tgt_string_id: TGTDBID
  action_vg: "clone_{{ tgt_string_id }}vg"
  host_vars_dir: /home/abhn02a/ansible/host_vars
  action_host: "{{ tgt_vm | default('localhost') }}"
  tgt_disks: "/dev/sde /dev/sdf"
Tasks:
- name: Display target disks
  debug:
    msg: "{{ item }}"
  loop: "{{ tgt_disks.split(' ') }}"

- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u "{{ item }}"
  loop: "{{ tgt_disks.split(' ') }}"
  #changed_when: false
  delegate_to: "{{ action_host }}"
Output:
......
.......
TASK [Display target disks]
*********************************************************************************************
ok: [localhost] => (item=/dev/sde) => {
    "msg": "/dev/sde"
}
ok: [localhost] => (item=/dev/sdf) => {
    "msg": "/dev/sdf"
}
!! This is not true, uuid stays same !!!
TASK [Change pv uuid]
***************************************************************************************************
changed: [localhost] => (item=/dev/sde)
changed: [localhost] => (item=/dev/sdf)
.......
.....
When I check the UUID of the disk, it has not been changed. But when I type pvchange -u /dev/sde myself, it changes the UUID of the disk with no problem!
I have Ansible 2.9 with Python 2.7.5.
Can you help, please?
(Oh, this is my first question on SO, sorry for the previous formatting.)
Edit
So I have almost reproduced the error at home.
This happens when I try to do a pvchange on a cloned disk (but again, only with Ansible!).
I have cloned 2 disks (/dev/xvdc and /dev/xvde) to (/dev/xvdf and /dev/xvdg), re-ran the task, and got the following output:
TASK [Change pv uuid]
*************************************************************************
failed: [localhost] (item=/dev/xvdf) => {"ansible_loop_var": "item", "changed": true, "cmd": "pvchange -u \"/dev/xvdf\"\n", "delta": "0:00:00.038053", "end": "2021-03-06 03:46:07.526239", "item": "/dev/xvdf", "msg": "non-zero return code", "rc": 5, "start": "2021-03-06 03:46:07.488186", "stderr": " Failed to find physical volume \"/dev/xvdf\".", "stderr_lines": [" Failed to find physical volume \"/dev/xvdf\"."], "stdout": " 0 physical volumes changed / 0 physical volumes not changed", "stdout_lines": [" 0 physical volumes changed / 0 physical volumes not changed"]}
This seems to be because the cloned disks are not yet "PVs" in LVM; I do not see /dev/xvdf and /dev/xvdg when I run pvs.
But at work this was different: I could see the cloned disks in the pvs output.
Will check again next week.
Thank you.
From the debug output of the execution, you can see that the command executed is pvchange -u "/dev/xvdf", which raises the error message 'Failed to find physical volume "/dev/xvdf".', so it looks like it doesn't like the quotes.
You can change your task to this; it should work better:
- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u {{ item }}
  loop: "{{ tgt_disks.split(' ') }}"
Run pvscan --cache before doing pvchange -u <PV>.
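A minimal sketch of that suggestion as an extra task in front of the loop; the task name and the changed_when setting are mine, while become, loop and delegate_to mirror the question's original task:
- name: Rescan devices so the cloned disks are known to LVM
  become: yes
  command: pvscan --cache
  changed_when: false
  delegate_to: "{{ action_host }}"

- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u {{ item }}
  loop: "{{ tgt_disks.split(' ') }}"
  delegate_to: "{{ action_host }}"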

How to display values of two different task register variables in ansible

Below is my playbook,
---
- hosts: all
  tasks:
    - name: asyn task 1 use time command to see diff use when hostname to get faster output
      command: sleep 15
      async: 2
      poll: 0
      register: result1
    - name: asyn task
      command: sleep 2
      register: result2
    - name: showing result1
      debug:
        var: result1
        var1: result2
    - name: debugging output
      debug: msg=this is the {{ result1 }} and {{ result2 }}
      # with_items:
      #   - {{ result1 }}
      #   - {{ result2 }}
I am getting the error below:
changed: [vishwa]
TASK [showing result1] **************************************************************************************************
fatal: [rudra]: FAILED! => {"changed": false, "msg": "'var1' is not a valid option in debug"}
fatal: [arya]: FAILED! => {"changed": false, "msg": "'var1' is not a valid option in debug"}
fatal: [vishwa]: FAILED! => {"changed": false, "msg": "'var1' is not a valid option in debug"}
to retry, use: --limit @/home/admin/ansibledemo/asynch.retry
First point: the debug module doesn't have an option var1, hence the error from the showing result1 task. You have probably figured that out already and written the debugging output task with the msg option.
This brings us to the second point: registering the status of an asynchronous task. Since you are using poll: 0, the task executes in asynchronous mode, so the result may not be immediately available in the registered variable. Use async_status to check the result, as described here. Also, in your scenario the async value should be larger than the sleep period.
An example:
- name: Wait for asynchronous job to end
  async_status:
    jid: '{{ result1.ansible_job_id }}'
  register: job_result
  until: job_result.finished
  retries: 5
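Putting that together, one possible sketch for displaying both registered variables once the async job has finished; the task names, the larger async value and the delay/retries are my own adjustments on top of the question's playbook:
- name: async task 1
  command: sleep 15
  async: 30            # must be longer than the sleep itself
  poll: 0
  register: result1

- name: async task 2
  command: sleep 2
  register: result2

- name: Wait for asynchronous job to end
  async_status:
    jid: '{{ result1.ansible_job_id }}'
  register: job_result
  until: job_result.finished
  delay: 5
  retries: 10

- name: Show both results
  debug:
    msg: "this is the {{ job_result }} and {{ result2 }}"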
Maybe the playbook below can help you fulfill your requirement. If not, I have some other alternatives too.
---
- hosts: all
  gather_facts: false
  name: "[ Debug Senario Test ]"
  become: true
  become_method: sudo
  tasks:
    - name: "[[ Command Task 1 ]]"
      command: "echo Command Task 1"
      register: register1
    - name: "[[ Command Task 2 ]]"
      command: "echo Command Task 2"
      register: register2
    - name: "[[ Display Registers ]]"
      command: "echo {{ item.stdout }}"
      with_items:
        - "{{ register1 }}"
        - "{{ register2 }}"
      register: register3
    - debug:
        msg: "{{ register3 }}"
debug does not have a parameter named var1. You can't just invent a parameter for a module; the error message is clear about that.
If you would like to store result1 or result2, use the set_fact module.
You are executing a task asynchronously, running in the background. With poll: 0 an async task follows fire-and-forget execution: it is triggered and stays in a running state while the next task is being triggered. Use the async_status module (https://docs.ansible.com/ansible/latest/modules/async_status_module.html) to fetch the result of the async task.
Ref: https://docs.ansible.com/ansible/2.7/user_guide/playbooks_async.html
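A brief sketch of the set_fact suggestion, assuming the async result has already been collected with async_status into job_result as in the earlier example; the fact names here are purely illustrative:
- name: Keep both results for later use
  set_fact:
    first_result: "{{ job_result }}"    # hypothetical fact name
    second_result: "{{ result2 }}"      # hypothetical fact name

- name: Show both stored results
  debug:
    msg: "first: {{ first_result }}, second: {{ second_result }}"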

Display Debug output in a clean format Ansible

I am trying to display the message output of a debug command in a nice format in Ansible. At the moment this is how the output looks:
TASK [stop : Report Status of Jenkins Process] *******************************************************************************************************************************
ok: [localhost] => {
    "msg": "Service Jenkins is Running.\nReturn code from `grep`:\n0\n"
}
TASK [stop : debug] **********************************************************************************************************************************************************
ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": false,
        "msg": "Service Jenkins is Running.\nReturn code from `grep`:\n0\n"
    }
}
How do I get rid of the '\n' characters and replace them with real new lines?
The code below, using split('\n'), does not work.
- name: Checking Jenkins Process
  shell: "ps -ef | grep -v grep | grep -v dhclient | grep jenkins"
  ignore_errors: yes
  register: jenkins_process

- debug:
    var: jenkins_process.rc

- name: Report Status of Jenkins Process
  fail:
    msg: |
      Service Jenkins is not found
      Return code from `grep`:
      {{ jenkins_process.rc }}
  when: jenkins_process.rc != 0
  register: report

- name: Report Status of Jenkins Process
  debug:
    msg: |
      Service Jenkins is Running.
      Return code from `grep`:
      {{ jenkins_process.rc }}
  when: jenkins_process.rc == 0
  register: report

- debug:
    msg: "{{ report.split('\n') }}"

- name: Stop Jenkins Service
  service:
    name: jenkins
    state: stopped
Is there a way to display this in a nice way?
You can use the debug stdout callback plugin.
You can specify it on the command line:
ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook ...
Or in the [defaults] section of your ansible.cfg configuration file:
stdout_callback = debug
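For reference, the corresponding ansible.cfg snippet would be something like the following (a minimal sketch; only the relevant key is shown):
[defaults]
stdout_callback = debug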

Using {{ and }} in ansible shell command

I have this in my playbook:
- name: Get facts about containers
  shell: "docker ps -f name=jenkins --format {%raw%}{{.Names}}{% endraw %}"
  register: container
Note that I inserted the {%raw%} and {%endraw%} so that Ansible does not evaluate the '{{'.
If I run this, I get this error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "{u'cmd': u'docker ps -f name=jenkins --format {{.Names}}', u'end': u'2017-01-19 10:04:27.491648', u'stdout': u'ecs-sde-abn-17-jenkins-ec8eccee8c9eb8e38f01', u'changed': True, u'start': u'2017-01-19 10:04:27.481090', u'delta': u'0:00:00.010558', u'stderr': u'', u'rc': 0, 'stdout_lines': [u'ecs-task-17-jenkins-ec8eccee8c9eb8e38f01'], u'warnings': []}: template error while templating string: unexpected '.'. String: docker ps -f name=jenkins --format {{.Names}}"}
In other words, the command ran successfully (the output ecs-task-17-jenkins-ec8eccee8c9eb8e38f01 is correct), but then it tries to template the string... again?
What is going wrong here and how can I fix this?
EDITED
Escaping the {{ like this:
shell: "docker ps -f name=jenkins --format {{ '{' }}{.Names}{{ '}' }}"
results in the same error.
This is a problem introduced in Ansible 2.2.1: https://github.com/ansible/ansible/issues/20400.
It happens when registering a variable with {{ inside.
This workaround prevents {{ from appearing in the registered variable:
shell: "docker inspect --format '{''{ .NetworkSettings.IPAddress }''}' consul"
Way to reproduce the bug:
- hosts: localhost
  gather_facts: no
  tasks:
    - shell: "echo '{''{.Names}''}'"
      register: myvar
    - debug: var=myvar
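Applied to the question's docker ps task, the same quote-splitting trick might look like this (a sketch only; the shell concatenates '{', '{.Names}' and '}' back into {{.Names}} before docker sees it, so the registered value never contains a literal {{):
- name: Get facts about containers
  shell: "docker ps -f name=jenkins --format '{''{.Names}''}'"
  register: container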
