Unable to run top on a remote host via Ansible

I have the following playbook:
---
- hosts: ESNodes
  remote_user: ihazan
  tasks:
    - name: Run Monitoring
      action: command /tmp/monitoring/cpu_mon
The content of /tmp/monitoring/cpu_mon is as follows:
top -bn1800 -p $(ps -ef | grep elasticsearch | grep -v grep | grep -v sudo | awk '{print $2}') | grep root > /tmp/cpu_stats &
Note that top is run in the background with &.
When running that playbook, Ansible gets stuck forever on the top command:
-bash-4.1$ ansible-playbook es_playbook_run.yml -l PerfSetup -K -f 10
sudo password:
PLAY [ESNodes] ****************************************************************
GATHERING FACTS ***************************************************************
ok: [isk-vsrv643]
TASK: [Run Monitoring] ********************************************************
When running it via remote SSH (which is what Ansible does under the hood), it works fine:
-bash-4.1$ ssh ihazan@isk-vsrv643 'nohup /tmp/monitoring/cpu_mon'
-bash-4.1$
Following is the debug version of the output:
-bash-4.1$ ansible-playbook es_playbook_run.yml -l PerfSetup -K -f 10 -vvvv
sudo password:
PLAY [ESNodes] ****************************************************************
GATHERING FACTS ***************************************************************
<isk-vsrv643> ESTABLISH CONNECTION FOR USER: ihazan on PORT 22 TO isk-vsrv643
<isk-vsrv643> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1393860499.75-256362698809430 && chmod a+rx $HOME/.ansible/tmp/ansible-1393860499.75-256362698809430 && echo $HOME/.ansible/tmp/ansible-1393860499.75-256362698809430'
<isk-vsrv643> REMOTE_MODULE setup
<isk-vsrv643> PUT /tmp/tmpZh9bYP TO /usr2/ihazan/.ansible/tmp/ansible-1393860499.75-256362698809430/setup
<isk-vsrv643> EXEC /bin/sh -c '/usr/bin/python /usr2/ihazan/.ansible/tmp/ansible-1393860499.75-256362698809430/setup; rm -rf /usr2/ihazan/.ansible/tmp/ansible-1393860499.75-256362698809430/ >/dev/null 2>&1'
ok: [isk-vsrv643]
TASK: [Run Monitoring] ********************************************************
<isk-vsrv643> ESTABLISH CONNECTION FOR USER: ihazan on PORT 22 TO isk-vsrv643
<isk-vsrv643> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1393860500.32-92141081389545 && chmod a+rx $HOME/.ansible/tmp/ansible-1393860500.32-92141081389545 && echo $HOME/.ansible/tmp/ansible-1393860500.32-92141081389545'
<isk-vsrv643> REMOTE_MODULE command /tmp/monitoring/cpu_mon
<isk-vsrv643> PUT /tmp/tmp7dYRPY TO /usr2/ihazan/.ansible/tmp/ansible-1393860500.32-92141081389545/command
<isk-vsrv643> EXEC /bin/sh -c '/usr/bin/python /usr2/ihazan/.ansible/tmp/ansible-1393860500.32-92141081389545/command; rm -rf /usr2/ihazan/.ansible/tmp/ansible-1393860500.32-92141081389545/ >/dev/null 2>&1'
Thanks in advance.

Use fire-and-forget mode, i.e. async with poll: 0:
---
- hosts: ESNodes
  remote_user: ihazan
  tasks:
    - name: Run Monitoring
      action: command /tmp/monitoring/cpu_mon
      async: 45
      poll: 0
The whole scoop on async is here: http://docs.ansible.com/playbooks_async.html
Good luck.
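If you later want to confirm the detached job finished, the registered job id can be polled with the async_status module. A minimal sketch (the 3600-second window, retry counts, and variable names here are illustrative, not from the question):

```yaml
- name: Run Monitoring in the background
  command: /tmp/monitoring/cpu_mon
  async: 3600   # let the job run for up to an hour
  poll: 0       # fire and forget; do not wait for completion
  register: cpu_mon_job

- name: Check on the background job (optional)
  async_status:
    jid: "{{ cpu_mon_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 60
```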

Related

Ansible playbook with become_method=pbrun not working

I am a beginner with Ansible. I am trying to run a command as a DB user, and we have pbrun set up for switching users at my company.
Below is how my pbrun policy is defined:
[ RunAs User ] [ Command ]
root /bin/su - couchbase
root /bin/su - enterprisedb
From ansible.cfg (only sharing privilege_escalation part):
[privilege_escalation]
become=true
become_method=pbrun
become_user=''
become_ask_pass=False
become_flags: '/bin/su - enterprisedb'
From the playbook:
$ cat ping.yml
- name: Test
  hosts: all
  gather_facts: false
  any_errors_fatal: false
  tasks:
    - shell: whoami
      register: output
    - debug:
        msg: "{{output.stdout}}"
Below is how I am running the playbook:
ansible-playbook -i sample.host1.list ping.yml -k -vvvv
Output:
$ ansible-playbook -i sample.host1.list ping.yml -k -vvvv
ansible-playbook 2.8.12
config file = /home/ads_username/ansible_work_dir/ansible.cfg
configured module search path = [u'/adshome/ads_username/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /home/ads_username/ansible_work_dir/ansible.cfg as config file
SSH password:
setting up inventory plugins
host_list declined parsing /home/ads_username/ansible_work_dir/sample.host1.list as it did not pass it's verify_file() method
auto declined parsing /home/ads_username/ansible_work_dir/sample.host1.list as it did not pass it's verify_file() method
yaml declined parsing /home/ads_username/ansible_work_dir/sample.host1.list as it did not pass it's verify_file() method
Parsed /home/ads_username/ansible_work_dir/sample.host1.list inventory source with ini plugin
Loading callback plugin debug of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/debug.pyc
Loading callback plugin profile_tasks of type aggregate, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/profile_tasks.pyc
PLAYBOOK: ping.yml ******************************************************************************************************************************************
Positional arguments: ping.yml
ask_pass: True
become_method: pbrun
inventory: (u'/home/ads_username/ansible_work_dir/sample.host1.list',)
forks: 5
tags: (u'all',)
verbosity: 4
connection: paramiko
timeout: 10
become: True
1 plays in ping.yml
PLAY [Adding VM to inventory] *******************************************************************************************************************************
META: ran handlers
TASK [shell] ************************************************************************************************************************************************
task path: /home/ads_username/ansible_work_dir/ping.yml:6
Wednesday 10 November 2021 15:21:32 -0700 (0:00:00.053) 0:00:00.053 ****
<server_name.region.company.com> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: None on PORT 22 TO server_name.region.company.com
<server_name.region.company.com> EXEC /bin/bash -c '( umask 77 && mkdir -p "` echo /tmp `"&& mkdir /tmp/ansible-tmp-1636582892.39-15614-57850062632655 && echo ansible-tmp-1636582892.39-15614-57850062632655="` echo /tmp/ansible-tmp-1636582892.39-15614-57850062632655 `" ) && sleep 0'
<server_name.region.company.com> Attempting python interpreter discovery
<server_name.region.company.com> EXEC /bin/bash -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<server_name.region.company.com> Python interpreter discovery fallback (pipelining support required for extended interpreter discovery)
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<server_name.region.company.com> PUT /adshome/ads_username/.ansible/tmp/ansible-local-155953Afqz2/tmpgNQgMu TO /tmp/ansible-tmp-1636582892.39-15614-57850062632655/AnsiballZ_command.py
<server_name.region.company.com> EXEC /bin/bash -c 'chmod u+x /tmp/ansible-tmp-1636582892.39-15614-57850062632655/ /tmp/ansible-tmp-1636582892.39-15614-57850062632655/AnsiballZ_command.py && sleep 0'
<server_name.region.company.com> EXEC /bin/bash -c 'echo BECOME-SUCCESS-sgemmsfapenzvcsbxdnbjneynirmhzkl; echo "/usr/bin/python /tmp/ansible-tmp-1636582892.39-15614-57850062632655/AnsiballZ_command.py"|pbrun /bin/su - enterprisedb && sleep 0'
<server_name.region.company.com> EXEC /bin/bash -c 'rm -f -r /tmp/ansible-tmp-1636582892.39-15614-57850062632655/ > /dev/null 2>&1 && sleep 0'
[WARNING]: Platform linux on host server_name.region.company.com is using the discovered Python interpreter at /usr/bin/python, but future installation of
another Python interpreter could change this. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.
fatal: [server_name.region.company.com]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"rc": 2
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Last login: Wed Nov 10 15:21:02 MST 2021
/usr/bin/python: can't open file '/tmp/ansible-tmp-1636582892.39-15614-57850062632655/AnsiballZ_command.py': [Errno 13] Permission denied
PLAY RECAP **************************************************************************************************************************************************
server_name.region.company.com : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Wednesday 10 November 2021 15:21:35 -0700 (0:00:03.141) 0:00:03.194 ****
===============================================================================
shell ------------------------------------------------------------------------------------------------------------------------------------------------ 3.14s
/home/ads_username/ansible_work_dir/ping.yml:6 --------------------------------------------------------------------------------------------------------------------
Please help me figure out what is wrong in my setup, and whether it is possible to make this work without changing anything in my pbrun policy.
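One detail worth noting (not from the original thread): the Errno 13 in MODULE_STDOUT usually means the AnsiballZ_command.py temp file, written as the SSH login user with a restrictive umask, is not readable by the user pbrun switches to. A commonly suggested workaround, offered here as an assumption rather than something verified against this pbrun policy, is to enable pipelining so no temp file is created on the target:

```ini
; ansible.cfg -- sketch of a possible workaround (assumption, untested with pbrun)
[ssh_connection]
pipelining = True
```

With pipelining on, the module is fed to the remote Python over stdin instead of being uploaded, sidestepping the file-permission problem; it does require that the become method can accept piped stdin.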

Skip confirmation in Ansible during deleting node with Kubspray

I'm trying to run the Ansible playbook remove-node.yml from the Kubespray repository.
But when I run the job I get an error:
TASK [check confirmation] ******************************************************
fatal: [node61]: FAILED! => {"changed": false, "msg": "Delete nodes confirmation failed"}
I'm doing it through GitLabCI and here is my .gitlab-ci.yml:
stages:
  - deploy

image: ***/releases/kubespray:v2.12.5

variables:
  ANSIBLE_HOST_KEY_CHECKING: "False"

before_script:
  - mkdir -p ~/.ssh
  - echo "$id_rsa" | base64 -d > ~/.ssh/id_rsa
  - chmod -R 700 ~/.ssh

delete_node:
  stage: deploy
  when: manual
  script:
    - ansible-playbook -v -u root --key-file=~/.ssh/id_rsa --extra-vars skip_confirmation=yes -i inventory/hosts.ini /kubespray/remove-node.yml -e "node=node61"
I've tried check_confirmation, skip_confirmation=true, True, 'true' and other variations, but none of them works.
The required variable is delete_nodes_confirmation not skip_confirmation.
So the answer is delete_nodes_confirmation=yes.
Can you run your script like this?
ansible-playbook -v -u root --key-file=~/.ssh/id_rsa -i inventory/hosts.ini /kubespray/remove-node.yml --extra-var "node=node61 skip_confirmation=true"
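Putting the two suggestions together, the invocation would look something like this (a sketch; the variable name comes from the answer above, the paths and node name from the question):

```shell
ansible-playbook -v -u root --key-file=~/.ssh/id_rsa \
  -i inventory/hosts.ini /kubespray/remove-node.yml \
  -e "node=node61 delete_nodes_confirmation=yes"
```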

starting jboss server using ansible and returning back control [duplicate]

This question already has answers here:
Ansible Command module says that '|' is illegal character
(2 answers)
Closed 5 years ago.
The YAML playbook below restarts the JBoss server but doesn't hand control back to execute the next Ansible command. I have also used the wait_for module to stop waiting for the current command's result and move on to the next command, but Ansible still hangs on the current command indefinitely. Please let me know where I went wrong.
---
- hosts: test1
  tasks:
    - name: simple command
      become: true
      command: whoami
      register: output
    - debug:
        msg: "I gave the command whoami and the out is '{{output.stdout}}'"
    - name: change to jboss user
      become: true
      become_user: jboss
      command: whoami
      register: output
    - debug:
        msg: "I gave the command whoami and the out is '{{output.stdout}}'"
    - name: start jboss server as jboss user
      become: true
      become_user: jboss
      command: sh /usr/jboss/bin/run.sh -c XXXXXXXX -b x.x.x.x &
      when: inventory_hostname in groups['test1']
      register: restartscript
    - debug:
        msg: "output of server restart command is '{{restartscript.stdout}}'"
    - name: waiting for server to come back
      local_action:
        module: wait_for
        timeout=20
        host=x.x.x.x
        port=8080
        delay=6
        state=started
Terminal output message:
ESTABLISH SSH CONNECTION FOR USER: XXXXXXXXXXX
SSH: EXEC sshpass -d12 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=XXXXXXXXX -o ConnectTimeout=10 -o ControlPath=/home/tcprod/XXXXXXXXXX/.ansible/cp/ansible-ssh-%h-%p-%r -tt X.X.X.X '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=hvgwnsbxpkxvbcmtcfvvsplfphdrevxg] password: " -u jboss /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hvgwnsbxpkxvbcmtcfvvsplfphdrevxg; /usr/bin/python /tmp/ansible-tmp-XXXXXXXXX.XX-XXXXXXXXXXXXXX/command.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
The & is not allowed in Ansible "command".
command: sh /usr/jboss/bin/run.sh -c XXXXXXXX -b x.x.x.x &
Try removing it or use shell instead of command.
From Ansible documentation about command:
The given command [...] will not
be processed through the shell, so variables like $HOME and operations
like "<", ">", "|", ";" and "&" will not work (use the shell module if
you need these features).
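A sketch of the start task rewritten to use shell; the nohup and output redirection are additions (not in the original question) to keep the remote session from waiting on the process's open file handles:

```yaml
- name: start jboss server as jboss user
  become: true
  become_user: jboss
  shell: nohup sh /usr/jboss/bin/run.sh -c XXXXXXXX -b x.x.x.x > /tmp/jboss-run.log 2>&1 &
  when: inventory_hostname in groups['test1']
  register: restartscript
```

Note that once the process is detached like this, restartscript.stdout will be empty, so the debug task that follows it will print nothing useful.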

Ansible job prompting for password

I have an Ansible job that ensures certain directories are present on remote servers and then copies files into them.
---
- hosts: cac
  tasks:
    - name: Create Required directories.
      file: path=/opt/app/ca/{{ item }} state=directory mode=0755 owner=admin group=admin
      with_items:
        - cac/webapps
        - cac/iam_config
    - name: Copy and unarchive webapps node.
      synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/app/ca/iam_cac checksum=yes
My environment file is:
[cac]
10.169.99.70
10.169.99.72
[cac:vars]
ansible_ssh_user=admin
ansible_ssh_pass=xyz
When I run the job, in debug mode I can see that Task one is run as admin user and no password is prompted from me.
But the second task asks me for the admin password.
TASK [Copy and unarchive webapps node.] ****************************************
task path: /home/ansible/playbooks/release-deploy.yaml:10
<10.169.99.70> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<10.169.99.70> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946 `" && echo ansible-tmp-1477753023.09-93847262523946="` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946 `" ) && sleep 0'
<10.169.99.72> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<10.169.99.72> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306 `" && echo ansible-tmp-1477753023.09-27220657560306="` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306 `" ) && sleep 0'
<10.169.99.70> PUT /tmp/tmpBP7rLm TO /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946/synchronize
<10.169.99.72> PUT /tmp/tmpVKR5A9 TO /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306/synchronize
<10.169.99.70> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946/synchronize; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946/" > /dev/null 2>&1 && sleep 0'
<10.169.99.72> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306/synchronize; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306/" > /dev/null 2>&1 && sleep 0'
admin@10.169.99.72's password: admin@10.169.99.70's password:
I am confused as to why this step is requiring me to enter the password when I have configured it in my environment file.
Secondly, why does it say:
ESTABLISH LOCAL CONNECTION FOR USER: ansible
The first sentence of the synchronize module doc page answers your second question (why it says ESTABLISH LOCAL CONNECTION FOR USER: ansible):
synchronize is a wrapper around the rsync command, meant to make common tasks with rsync easier. It is run and originates on the local host where Ansible is being run.
As for the first question, the parameters section of the same manual explains that you need to use the following argument:
use_ssh_args
(default: no)
Use the ssh_args specified in ansible.cfg.
While it refers only to ansible.cfg, it refers to the variables defined in the inventory file as well.
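Applied to the task from the question, that would look something like this (a sketch; whether use_ssh_args picks up inventory-defined credentials can vary by Ansible version):

```yaml
- name: Copy and unarchive webapps node.
  synchronize:
    src: /home/ansible/templates/app/Sprint6/webapps
    dest: /opt/app/ca/iam_cac
    checksum: yes
    use_ssh_args: yes
```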

Ansible lineinfile give an error with /etc/hosts

I have this simple task in my role:
- name: Updating the /etc/hosts
  lineinfile: dest=/etc/hosts line="192.168.99.100 {{ item }}"
  with_items:
    - domain1.com
    - domain2.com
  tags: etc
When I run my Ansible playbook:
robe:ansible-develop robe$ ansible-playbook -i inventory develop-env.yml -vvvv --extra-vars "user=`whoami`" --tags etc --become-user=robe --ask-become-pass
SUDO password:
PLAY [127.0.0.1] **************************************************************
GATHERING FACTS ***************************************************************
<127.0.0.1> REMOTE_MODULE setup
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p /tmp/ansible-tmp-1446050161.27-256837595805154 && chmod a+rx /tmp/ansible-tmp-1446050161.27-256837595805154 && echo /tmp/ansible-tmp-1446050161.27-256837595805154']
<127.0.0.1> PUT /var/folders/x1/dyrdksh50tj0z2szv3zx_9rc0000gq/T/tmpMYjnXz TO /tmp/ansible-tmp-1446050161.27-256837595805154/setup
<127.0.0.1> EXEC ['/bin/sh', '-c', 'chmod a+r /tmp/ansible-tmp-1446050161.27-256837595805154/setup']
<127.0.0.1> EXEC /bin/sh -c 'sudo -k && sudo -H -S -p "[sudo via ansible, key=rqphpqfpcbsifqtnwflmmlmpwrcnkpqe] password: " -u robe /bin/sh -c '"'"'echo BECOME-SUCCESS-rqphpqfpcbsifqtnwflmmlmpwrcnkpqe; LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1446050161.27-256837595805154/setup'"'"''
<127.0.0.1> EXEC ['/bin/sh', '-c', 'rm -rf /tmp/ansible-tmp-1446050161.27-256837595805154/ >/dev/null 2>&1']
ok: [127.0.0.1]
TASK: [docker-tool-box | Updating the /etc/hosts] *****************************
<127.0.0.1> REMOTE_MODULE lineinfile dest=/etc/hosts line="192.168.99.100 ptxrt.com"
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p /tmp/ansible-tmp-1446050161.49-9492873099893 && chmod a+rx /tmp/ansible-tmp-1446050161.49-9492873099893 && echo /tmp/ansible-tmp-1446050161.49-9492873099893']
<127.0.0.1> PUT /var/folders/x1/dyrdksh50tj0z2szv3zx_9rc0000gq/T/tmpyLOGd6 TO /tmp/ansible-tmp-1446050161.49-9492873099893/lineinfile
<127.0.0.1> EXEC ['/bin/sh', '-c', u'chmod a+r /tmp/ansible-tmp-1446050161.49-9492873099893/lineinfile']
<127.0.0.1> EXEC /bin/sh -c 'sudo -k && sudo -H -S -p "[sudo via ansible, key=nofwziqxytbhjwhluhtzdfcqclqjuypv] password: " -u robe /bin/sh -c '"'"'echo BECOME-SUCCESS-nofwziqxytbhjwhluhtzdfcqclqjuypv; LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1446050161.49-9492873099893/lineinfile'"'"''
<127.0.0.1> EXEC ['/bin/sh', '-c', 'rm -rf /tmp/ansible-tmp-1446050161.49-9492873099893/ >/dev/null 2>&1']
failed: [127.0.0.1] => (item=ptxrt.com) => {"failed": true, "item": "ptxrt.com"}
msg: The destination directory (/private/etc) is not writable by the current user.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/robe/develop-env.retry
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=1
I don't understand why the error message says:
msg: The destination directory (/private/etc) is not writable by the current user.
The path I specified was /etc/hosts.
Any clue?
I am working on MacOS.
My playbook is:
- hosts: 127.0.0.1
  connection: local
  become: yes
  become_method: sudo
  become_user: "{{user}}"
  roles:
    - role-1
    - role-2
I pass the become_user on the command line, so all my roles run with become, and it still doesn't work.
On OSX the /etc/ folder is actually a symlink to the /private/etc/ folder, hence the error (Ansible is just transparently following the symlink).
As for the error, you're going to need to run the task with become: yes (sudo permissions) to be able to write to /etc/hosts.
Edit based on the update and comments
To get the correct privileges to edit the hosts file you need to be root. Setting become: yes on the task is good enough for this for OSX as Ansible will default to sudo as the become method and root as the user.
To specify the sudo password you can do one of two things.
Use --ask-become-pass on the command line and Ansible will prompt you when it needs it
Use the ansible_become_pass variable on the group or host in the inventory file. E.g. localhost ansible_become_pass=batman
Note that the Ansible docs recommend option 1 over option 2, so as not to store your password in plain text.
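A sketch of the task from the question with become set at the task level (root is Ansible's default become user, so no become_user is needed):

```yaml
- name: Updating the /etc/hosts
  become: yes   # escalate to root just for this task
  lineinfile:
    dest: /etc/hosts
    line: "192.168.99.100 {{ item }}"
  with_items:
    - domain1.com
    - domain2.com
  tags: etc
```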