I wrote an sshfs Ansible playbook to mount a remote directory from a server.
When I execute the same command manually in a shell it works (the remote directory contents are visible). But when I try it with the Ansible playbook, the remote directory is not mounted as expected.
Added this line to /etc/fuse.conf:
user_allow_other
Added the lines below to /etc/ssh/ssh_config:
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
StrictHostKeyChecking no
Even without these additions, the command works when run manually.
But the Ansible playbook does not mount the remote directory, even though the playbook reports success.
**fuse.yml**
---
- hosts: server
  become: yes
  tasks:
    - name: Mount Media Directory
      shell: echo root123 | sshfs -o password_stdin,reconnect,nonempty,allow_other,idmap=user stack@10.1.1.1:/home/stack /mnt/server1
root@stack-VirtualBox:~/playbook# ansible-playbook fusessh.yml -vvv
<10.1.1.1> ESTABLISH SSH CONNECTION FOR USER: stack
<10.1.1.1> SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/stack/.ssh/id_rsa"' -o User=stack -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/88bbb1646b 10.1.1.1 '/bin/sh -c '"'"'rm -f -r /tmp/stack/ansible/ansible-tmp-1568065019.557693-124891815649027/ > /dev/null 2>&1 && sleep 0'"'"''
<10.1.1.1> (0, b'', b'')
changed: [10.1.1.1] => {
"changed": true,
"cmd": "echo root123 | sshfs -o password_stdin,reconnect,nonempty,allow_other,idmap=user stack#10.1.1.1:/home/stack /mnt/server1",
"delta": "0:00:00.548757",
"end": "2019-09-09 15:37:00.579023",
"invocation": {
"module_args": {
"_raw_params": "echo root123 | sshfs -o password_stdin,reconnect,nonempty,allow_other,idmap=user stack#10.1.1.1:/home/stack /mnt/server1",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"rc": 0,
"start": "2019-09-09 15:37:00.030266",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************
10.1.1.1 : ok=2 changed=1 unreachable=0 failed=0
The sshfs action is performed on the remote host instead of locally. The reason it works manually is that the sshfs mount is performed in your local shell, not on the remote server. I modified your playbook by adding local_action; I have tested it and it works fine.
---
- hosts: server
  become: yes
  tasks:
    - name: Mount Media Directory
      local_action: shell echo root123 | sshfs -o password_stdin,reconnect,nonempty,allow_other,idmap=user stack@10.1.1.1:/home/stack /mnt/server1
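An equivalent, more explicit form is delegate_to: localhost. Below is a minimal sketch under the same assumptions as the answer above; the mountpoint guard is an addition of mine (not part of the original answer) to keep the task from re-mounting on every run:
---
- hosts: server
  become: yes
  tasks:
    - name: Mount Media Directory (runs on the Ansible controller)
      # only attempt the mount if nothing is mounted at /mnt/server1 yet
      shell: mountpoint -q /mnt/server1 || echo root123 | sshfs -o password_stdin,reconnect,nonempty,allow_other,idmap=user stack@10.1.1.1:/home/stack /mnt/server1
      delegate_to: localhost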
One of my Ansible tasks is failing without an error trace. The only message is that there was a non-zero return code (caused by set -e?) when running the Bitbucket DIY backup script. This was running earlier but is failing now, after making some changes (enabling IMDSv2) on the EC2 Bitbucket server.
It was recommended that I manually run the task on the host to see if I can get some output, but I find myself needing to recreate the entire directory of shell scripts on the host machine to do that.
So I have two questions:
Is there a better way to approach this?
How can I run the task on the host machine from the point of failure? Some tasks before it were successful; does Ansible create a directory for running tasks, and could I continue from there?
Sorry, I am really new to Ansible; I tried going over the docs but couldn't find a way to debug this properly.
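On the second question: Ansible stages each task in a temporary directory under ~/.ansible/tmp on the host and deletes it when the task finishes, which is why there is nothing left to re-run. One approach, sketched below, is to keep those files and restart the play at the failing task (the playbook and task names here are placeholders):
# keep the generated module payloads on the target for manual inspection,
# and resume the play at the task that failed
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook backup.yml -vvv --start-at-task="run bitbucket diy backup"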
The Ansible debug log doesn't give any helpful output:
<ip-addr> (1, b'\\r\\n{"changed": true, "stdout": "", "stderr": "", "rc": 22, "cmd": ["/apps/bitbucket/atlassian-bitbucket-diy-backup/bitbucket.diy-backup.sh"], "start": "2023-02-16 22:31:18.187412", "end": "2023-02-16 22:31:18.280648", "delta": "0:00:00.093236", "failed": true, "msg": "non-zero return code", "invocation": {"module_args": {"_raw_params": "/apps/bitbucket/atlassian-bitbucket-diy-backup/bitbucket.diy-backup.sh", "_uses_shell": false, "warn": false, "stdin_add_newline": true, "strip_empty_ends": true, "argv": null, "chdir": null, "executable": null, "creates": null, "removes": null, "stdin": null}}}\\r\\n', b'Shared connection to ip-addr closed.\\r\\n')
<ip-addr> Failed to connect to the host via ssh: Shared connection to ip-addr closed.
<ip-addr> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<ip-addr> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/runner/cp/72bdb94d8e"' ip-addr '/bin/sh -c '"'"'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1676547077.8146338-376-91399646558620/ > /dev/null 2>&1 && sleep 0'"'"''
<ip-addr> (0, b'', b'')
fatal: [ip-addr]: FAILED! => {
"changed": true,
"cmd": [
"/apps/bitbucket/atlassian-bitbucket-diy-backup/bitbucket.diy-backup.sh"
],
"delta": "0:00:00.093236",
"end": "2023-02-16 22:31:18.280648",
"invocation": {
"module_args": {
"_raw_params": "/apps/bitbucket/atlassian-bitbucket-diy-backup/bitbucket.diy-backup.sh",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"msg": "non-zero return code",
"rc": 22,
"start": "2023-02-16 22:31:18.187412",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
PLAY RECAP *********************************************************************
ip-addr : ok=13 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So I tried manually running the script on the host machine, but I'm not sure I did it right. I expected an error trace from the Bitbucket DIY backup script, so I added some echo statements to see how far it gets, and it doesn't output anything after this line (it should ideally print "debug"):
set -e
echo "start" # manually added
SCRIPT_DIR=$(dirname "$0")
source "${SCRIPT_DIR}/utils.sh"
echo "debug" #manually added
I have two playbooks.
Playbook1.yaml installs dependencies as the root user and works as expected, but Playbook2 is giving errors. Can someone help me figure out why Playbook2 fails to run when the majority of the code is the same in both playbooks?
Playbook1 YAML file:
---
- name: Install Cognos Analytics
  hosts: all
  become_method: dzdo
  become_user: root
  become_flags: 'su -'
  tasks:
    - name: Install Cognos Analytics Dependencies
      yum:
        name:
          - java-1.8.0-openjdk
          - glibc.i686
          - glibc.x86_64
          - libstdc++.i686
          - libstdc++.x86_64
          - nspr.i686
          - nspr.x86_64
          - nss.i686
          - nss.x86_64
Now Playbook2, shown below, gives the following error when I try to run it. Can someone help me with this?
---
- name: Install Cognos Analytics
  hosts: all
  become_method: dzdo
  become_user: root
  become_flags: 'su -'
  tasks:
    - name: Installing Cognos Analytics
      command: /apps/Softwares/ca_instl_lnxi38664_2.0.2003191.bin -f /apps/Softwares/cognosresponsefile.properties -i silent
      args:
        chdir: /apps/SilentInstall
Error Log:
TASK [Installing Cognos Analytics] **************************************************************************************************************************
task path: /etc/ansible/Cognos.yml:9
<10.x.x.x> ESTABLISH SSH CONNECTION FOR USER: jughead
<10.x.x.x> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="jughead"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/a67e55b20e 10.x.x.x '/bin/sh -c '"'"'echo ~jughead && sleep 0'"'"''
<10.x.x.x> (0, '/home/jughead\n', '')
<10.x.x.x> ESTABLISH SSH CONNECTION FOR USER: jughead
<10.x.x.x> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="jughead"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/a67e55b20e 10.x.x.x '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/jughead/.ansible/tmp `"&& mkdir "` echo /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020 `" && echo ansible-tmp-1624909377.53-12390-42703578539020="` echo /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020 `" ) && sleep 0'"'"''
<10.x.x.x> (0, 'ansible-tmp-1624909377.53-12390-42703578539020=/home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<10.x.x.x> PUT /root/.ansible/tmp/ansible-local-12346tPaVOe/tmpXlWUhD TO /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020/AnsiballZ_command.py
<10.x.x.x> SSH: EXEC sshpass -d8 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="jughead"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/a67e55b20e '[10.x.x.x]'
<10.x.x.x> (0, 'sftp> put /root/.ansible/tmp/ansible-local-12346tPaVOe/tmpXlWUhD /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020/AnsiballZ_command.py\n', '')
<10.x.x.x> ESTABLISH SSH CONNECTION FOR USER: jughead
<10.x.x.x> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="jughead"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/a67e55b20e 10.x.x.x '/bin/sh -c '"'"'chmod u+x /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020/ /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020/AnsiballZ_command.py && sleep 0'"'"''
<10.x.x.x> (0, '', '')
<10.x.x.x> ESTABLISH SSH CONNECTION FOR USER: jughead
<10.x.x.x> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="jughead"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/a67e55b20e -tt 10.x.x.x '/bin/sh -c '"'"'/usr/bin/python /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020/AnsiballZ_command.py && sleep 0'"'"''
<10.x.x.x> (1, '\r\n{"changed": true, "end": "2021-06-28 15:43:16.115767", "stdout": "", "cmd": ["/apps/Softwares/ca_instl_lnxi38664_2.0.2003191.bin", "-f", "/apps/Softwares/cognosresponsefile.properties", "-i", "silent"], "failed": true, "delta": "0:00:18.049758", "stderr": "", "rc": 255, "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": false, "strip_empty_ends": true, "_raw_params": "/apps/Softwares/ca_instl_lnxi38664_2.0.2003191.bin -f /apps/Softwares/cognosresponsefile.properties -i silent", "removes": null, "argv": null, "warn": true, "chdir": "/apps/SilentInstall", "stdin_add_newline": true, "stdin": null}}, "start": "2021-06-28 15:42:58.066009", "msg": "non-zero return code"}\r\n', 'Shared connection to 10.x.x.x closed.\r\n')
<10.x.x.x> Failed to connect to the host via ssh: Shared connection to 10.x.x.x closed.
<10.x.x.x> ESTABLISH SSH CONNECTION FOR USER: jughead
<10.x.x.x> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="jughead"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/a67e55b20e 10.x.x.x '/bin/sh -c '"'"'rm -f -r /home/jughead/.ansible/tmp/ansible-tmp-1624909377.53-12390-42703578539020/ > /dev/null 2>&1 && sleep 0'"'"''
<10.x.x.x> (0, '', '')
fatal: [10.x.x.x]: FAILED! => {
"changed": true,
"cmd": [
"/apps/Softwares/ca_instl_lnxi38664_2.0.2003191.bin",
"-f",
"/apps/Softwares/cognosresponsefile.properties",
"-i",
"silent"
],
"delta": "0:00:18.049758",
"end": "2021-06-28 15:43:16.115767",
"invocation": {
"module_args": {
"_raw_params": "/apps/Softwares/ca_instl_lnxi38664_2.0.2003191.bin -f /apps/Softwares/cognosresponsefile.properties -i silent",
"_uses_shell": false,
"argv": null,
"chdir": "/apps/SilentInstall",
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 255,
"start": "2021-06-28 15:42:58.066009",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
Seeing as the only thing that makes sense is that Playbook 1 somehow affects Playbook 2, I searched for your issue and found this: https://access.redhat.com/solutions/475513
And this:
https://support.oracle.com/knowledge/Oracle%20Database%20Products/2543805_1.html
Unfortunately, neither of these offers a solution, just hints. Assuming that the installation of NSS is the issue, you could try to find out why sshd is failing by checking its logs.
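For example, on a RHEL-style host you could look at the SSH daemon's log entries right after a failed run (a generic sketch, not specific to your environment):
# recent sshd messages on a systemd-based system
journalctl -u sshd --since "15 minutes ago"
# or, without journalctl
grep sshd /var/log/secure | tail -n 50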
I am working on backups for my server and am using sshfs for this. When mounting the backup folder, the backup server asks for a password. This is what my task (handler) looks like:
- name: Mount backup folder
  become: yes
  expect:
    command: "sshfs -o allow_other,default_permissions {{ backup_server.user }}@{{ backup_server.host }}:/ /mnt/backup"
    echo: yes
    responses:
      (.*)password(.*): "{{ backup_server.password }}"
      (.*)Are you sure you want to continue(.*): "yes"
  listen: mount-backup-folder
It runs and produces this output:
changed: [prod1.com] => {
"changed": true,
"cmd": "sshfs -o allow_other,default_permissions user#hostname.com:/ /mnt/backup",
"delta": "0:00:00.455753",
"end": "2021-01-26 14:57:34.482440",
"invocation": {
"module_args": {
"chdir": null,
"command": "sshfs -o allow_other,default_permissions user#hostname.com:/ /mnt/backup",
"creates": null,
"echo": true,
"removes": null,
"responses": {
"(.*)Are you sure you want to continue(.*)": "yes",
"(.*)password(.*)": "password"
},
"timeout": 30
}
},
"rc": 0,
"start": "2021-01-26 14:57:34.026687",
"stdout": "user#hostname.com's password: ",
"stdout_lines": [
"user#hostname.com's password: "
]
}
But when I go to the server the folder is not synced with the backup server. BUT when I run the command manually:
sshfs -o allow_other,default_permissions user@hostname.com:/ /mnt/backup
The backup DOES work. Does anybody know how this is possible?
I suspect sshfs was killed by SIGHUP. I know nothing about Ansible, so I don't know whether it has an official way to ignore SIGHUP. As a workaround you can write it like this:
expect:
  command: bash -c "trap '' HUP; sshfs -o ..."
I installed sshfs and verified this bash -c "trap ..." workaround with Expect (spawn -ignore HUP ...) and sexpect (spawn -nohup ...). I believe it'd also work with Ansible (seems like its expect module uses Python's pexpect).
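Applied to the task from the question, the workaround would look roughly like this (a sketch; the variables are the ones defined above and the rest of the task is unchanged):
- name: Mount backup folder
  become: yes
  expect:
    # trap '' HUP makes the spawned shell ignore the hangup signal,
    # so sshfs survives when the expect session's terminal goes away
    command: bash -c "trap '' HUP; sshfs -o allow_other,default_permissions {{ backup_server.user }}@{{ backup_server.host }}:/ /mnt/backup"
    echo: yes
    responses:
      (.*)password(.*): "{{ backup_server.password }}"
      (.*)Are you sure you want to continue(.*): "yes"
  listen: mount-backup-folder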
I have deployed two Ubuntu 18.04 machines in VirtualBox running on Windows 10.
Ansible 2.9.6 is installed on the controller host.
Now I am stuck trying to connect the controller host to the other host.
My /etc/ansible/hosts is defined as below; localhost is the controller, staging is the other Ubuntu VM in VirtualBox.
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
staging ansible_host=10.0.2.15 ansible_ssh_pass=1234 ansible_ssh_user=dinesh ansible_python_interpreter=/usr/bin/python3
My ansible.cfg is defined as below:
[defaults]
host_key_checking = false
[defaults]
transport = ssh
[ssh_connection]
#ssh_args = -o ForwardAgent=yes
My playbook cloning.yml is below; I am just trying to clone a public git repo.
First I loosen the permissions on the folder I am cloning into, since the error said permission denied, but I don't think that is the actual solution.
---
- hosts: staging
  become: true
  become_user: dinesh
  gather_facts: no
  tasks:
    - name: Change file permission to liberal
      command: /usr/bin/find . -type f -exec chmod 777 -- {} +
      args:
        chdir: /usr/local/src/
      register: output
    - debug: var=output.stdout_lines
    - name: pull from git
      git:
        repo: https://github.com/fossology/fossology.git
        dest: /usr/local/src/fossology
        update: yes
        version: master
        force: yes
    - name: git status
      command: /usr/bin/git rev-parse HEAD
      args:
        chdir: /usr/local/src/fossology
      register: output
    - debug: var=output.stdout_lines
    - name: start the docker
      docker_compose:
        project_src: usr/local/src/fossology
        state: present
Error part
TASK [pull from git] ***************************************************************************************************************************************************************************************
task path: /home/dinesh/Documents/fossy_initial_setup.yml:13
<10.0.2.15> ESTABLISH SSH CONNECTION FOR USER: dinesh
<10.0.2.15> SSH: EXEC sshpass -d10 ssh -o ForwardAgent=yes -o StrictHostKeyChecking=no -o 'User="dinesh"' -o ConnectTimeout=10 10.0.2.15 '/bin/sh -c '"'"'echo ~dinesh && sleep 0'"'"''
<10.0.2.15> (0, '/home/dinesh\n', '')
<10.0.2.15> ESTABLISH SSH CONNECTION FOR USER: dinesh
<10.0.2.15> SSH: EXEC sshpass -d10 ssh -o ForwardAgent=yes -o StrictHostKeyChecking=no -o 'User="dinesh"' -o ConnectTimeout=10 10.0.2.15 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947 `" && echo ansible-tmp-1584285396.54-187145294823947="` echo /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947 `" ) && sleep 0'"'"''
<10.0.2.15> (0, 'ansible-tmp-1584285396.54-187145294823947=/home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947\n', '')
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/source_control/git.py
<10.0.2.15> PUT /home/dinesh/.ansible/tmp/ansible-local-21868PFeQ7k/tmpssQrbR TO /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947/AnsiballZ_git.py
<10.0.2.15> SSH: EXEC sshpass -d10 sftp -o BatchMode=no -b - -o ForwardAgent=yes -o StrictHostKeyChecking=no -o 'User="dinesh"' -o ConnectTimeout=10 '[10.0.2.15]'
<10.0.2.15> (0, 'sftp> put /home/dinesh/.ansible/tmp/ansible-local-21868PFeQ7k/tmpssQrbR /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947/AnsiballZ_git.py\n', '')
<10.0.2.15> ESTABLISH SSH CONNECTION FOR USER: dinesh
<10.0.2.15> SSH: EXEC sshpass -d10 ssh -o ForwardAgent=yes -o StrictHostKeyChecking=no -o 'User="dinesh"' -o ConnectTimeout=10 10.0.2.15 '/bin/sh -c '"'"'chmod u+x /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947/ /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947/AnsiballZ_git.py && sleep 0'"'"''
<10.0.2.15> (0, '', '')
<10.0.2.15> ESTABLISH SSH CONNECTION FOR USER: dinesh
<10.0.2.15> SSH: EXEC sshpass -d10 ssh -o ForwardAgent=yes -o StrictHostKeyChecking=no -o 'User="dinesh"' -o ConnectTimeout=10 -tt 10.0.2.15 '/bin/sh -c '"'"'/usr/bin/python3 /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947/AnsiballZ_git.py && sleep 0'"'"''
<10.0.2.15> (1, '\r\n{"cmd": "/usr/bin/git clone --origin origin https://github.com/fossology/fossology.git /usr/local/src/fossology", "rc": 1, "stdout": "", "stderr": "Cloning into \'/usr/local/src/fossology\'...\\n/usr/local/src/fossology/.git: Permission denied\\n", "msg": "Cloning into \'/usr/local/src/fossology\'...\\n/usr/local/src/fossology/.git: Permission denied", "failed": true, "invocation": {"module_args": {"force": true, "dest": "/usr/local/src/fossology", "update": true, "repo": "https://github.com/fossology/fossology.git", "version": "master", "remote": "origin", "clone": true, "verify_commit": false, "gpg_whitelist": [], "accept_hostkey": false, "bare": false, "recursive": true, "track_submodules": false, "refspec": null, "reference": null, "depth": null, "key_file": null, "ssh_opts": null, "executable": null, "umask": null, "archive": null, "separate_git_dir": null}}}\r\n', 'Connection to 10.0.2.15 closed.\r\n')
<10.0.2.15> Failed to connect to the host via ssh: Connection to 10.0.2.15 closed.
<10.0.2.15> ESTABLISH SSH CONNECTION FOR USER: dinesh
<10.0.2.15> SSH: EXEC sshpass -d10 ssh -o ForwardAgent=yes -o StrictHostKeyChecking=no -o 'User="dinesh"' -o ConnectTimeout=10 10.0.2.15 '/bin/sh -c '"'"'rm -f -r /home/dinesh/.ansible/tmp/ansible-tmp-1584285396.54-187145294823947/ > /dev/null 2>&1 && sleep 0'"'"''
<10.0.2.15> (0, '', '')
fatal: [fossology_staging]: FAILED! => {
"changed": false,
"cmd": "/usr/bin/git clone --origin origin https://github.com/fossology/fossology.git /usr/local/src/fossology",
"invocation": {
"module_args": {
"accept_hostkey": false,
"archive": null,
"bare": false,
"clone": true,
"depth": null,
"dest": "/usr/local/src/fossology",
"executable": null,
"force": true,
"gpg_whitelist": [],
"key_file": null,
"recursive": true,
"reference": null,
"refspec": null,
"remote": "origin",
"repo": "https://github.com/fossology/fossology.git",
"separate_git_dir": null,
"ssh_opts": null,
"track_submodules": false,
"umask": null,
"update": true,
"verify_commit": false,
"version": "master"
}
},
"msg": "Cloning into '/usr/local/src/fossology'...\n/usr/local/src/fossology/.git: Permission denied",
"rc": 1,
"stderr": "Cloning into '/usr/local/src/fossology'...\n/usr/local/src/fossology/.git: Permission denied\n",
"stderr_lines": [
"Cloning into '/usr/local/src/fossology'...",
"/usr/local/src/fossology/.git: Permission denied"
],
"stdout": "",
"stdout_lines": []
}
PLAY RECAP *************************************************************************************************************************************************************************************************
fossology_staging : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Either the connection is not happening or the SSH connection is wrong. But when I do:
dinesh@dinesh-VirtualBox:~/Documents$ ansible staging -m ping
staging | SUCCESS => {
"changed": false,
"ping": "pong"
}
dinesh@dinesh-VirtualBox:~/Documents$
What am I doing wrong here?
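The error itself points at filesystem permissions rather than SSH: the clone runs as dinesh, who apparently cannot write inside /usr/local/src. One possible fix, sketched below under the assumption that you are willing to create the target directory as root and hand it over to dinesh, would be to add a task like this before the git task:
- name: Create the clone target owned by the connecting user
  become: true
  become_user: root   # override the play-level become_user: dinesh for this task
  file:
    path: /usr/local/src/fossology
    state: directory
    owner: dinesh
    group: dinesh
    mode: '0755'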
I am not able to execute a shell script remotely with Ansible. However, previous tasks in the same role (filebeat) are executed on the remote server successfully. I am running the following from the local server 172.28.28.6 to install and run Filebeat on the remote server 172.28.28.81.
Playbook: install-filebeat.yml:
- hosts: filebeat-servers
  remote_user: wwwadm
  sudo: yes
  roles:
    - { role: /vagrant/roles/filebeat }
Role filebeat: main.yml:
---
# tasks file for filebeat
- name: "Extract Filebeat"
  unarchive:
    src: "{{ tmp_artifact_cache }}/{{ filebeat_archive }}"
    remote_src: yes
    dest: "{{ filebeat_root_dir }}"
    extra_opts: ['--transform=s,/*[^/]*,{{ filebeat_ver }},i', '--show-stored-names']
  become: yes
  become_user: "{{ filebeat_install_as }}"
  when: not ansible_check_mode
  tags: [ 'filebeat' ]

- name: Configure Filebeat
  template:
    src: "filebeat.yml.j2"
    dest: "{{ filebeat_install_dir }}/filebeat.yml"
    mode: 0775
  become: yes
  become_user: "{{ filebeat_install_as }}"
  tags: [ 'filebeat' ]

- name: 'Filebeat startup script'
  template:
    src: "startup.sh.j2"
    dest: "{{ filebeat_install_dir }}/bin/startup.sh"
    mode: 0755
  become: yes
  become_user: "{{ filebeat_install_as }}"
  tags: [ 'filebeat', 'start' ]

# This one does not get executed at all:
- name: "Start Filebeat"
  # shell: "{{ filebeat_install_dir }}/bin/startup.sh"
  command: "sh {{ filebeat_install_dir }}/bin/startup.sh"
  become: yes
  become_user: "{{ filebeat_install_as }}"
defaults:
# defaults file for filebeat
filebeat_ver: "6.6.0"
filebeat_archive: "filebeat-{{ filebeat_ver }}-linux-x86_64.tar.gz"
filebeat_archive_checksum : "sha1:d38d8fea7e9915582720280eb0118b7d92569b23"
filebeat_url: "https://artifacts.elastic.co/downloads/beats/filebeat/{{ filebeat_archive }}"
filebeat_root_dir: "{{ apps_home }}/filebeat"
filebeat_data_dir: "{{ apps_data }}/filebeat"
filebeat_log_dir: "{{ apps_logs }}/filebeat"
filebeat_install_dir: "{{ filebeat_root_dir }}/{{ filebeat_ver }}"
filebeat_cert_dir: "/etc/pki/tls/certs"
filebeat_ssl_certificate_file: "logstash.crt"
filebeat_ssl_key_file: "logstash.key"
filebeat_install_as: "{{ install_user | default('wwwadm') }}"
filebeat_set_as_current: yes
filebeat_force_clean_install: no
filebeat_java_home: "{{ sw_home }}/jdk"
inventory/local/hosts:
localhost ansible_connection=local
[filebeat-servers]
172.28.28.81 ansible_user=vagrant ansible_connection=ssh
Filebeat is installed and the changes are made on the remote server, except for the last step, the execution of the shell script.
When running the playbook as follows:
ansible-playbook -i /vagrant/inventory/local install-filebeat.yml -vvv
I get the following output related to the shell execution:
TASK [/vagrant/roles/filebeat : Start Filebeat] ***************************************************************************************************************************************************************
task path: /vagrant/roles/filebeat/tasks/main.yml:184
<172.28.28.81> ESTABLISH SSH CONNECTION FOR USER: vagrant
<172.28.28.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/f66f05c055 172.28.28.81 '/bin/sh -c '"'"'echo ~vagrant && sleep 0'"'"''
<172.28.28.81> (0, '/home/vagrant\n', '')
<172.28.28.81> ESTABLISH SSH CONNECTION FOR USER: vagrant
<172.28.28.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/f66f05c055 172.28.28.81 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /var/tmp/ansible-tmp-1550178583.24-35955954120606 `" && echo ansible-tmp-1550178583.24-35955954120606="` echo /var/tmp/ansible-tmp-1550178583.24-35955954120606 `" ) && sleep 0'"'"''
<172.28.28.81> (0, 'ansible-tmp-1550178583.24-35955954120606=/var/tmp/ansible-tmp-1550178583.24-35955954120606\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<172.28.28.81> PUT /home/vagrant/.ansible/tmp/ansible-local-13658UX7cBC/tmpFzf2Ll TO /var/tmp/ansible-tmp-1550178583.24-35955954120606/AnsiballZ_command.py
<172.28.28.81> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/f66f05c055 '[172.28.28.81]'
<172.28.28.81> (0, 'sftp> put /home/vagrant/.ansible/tmp/ansible-local-13658UX7cBC/tmpFzf2Ll /var/tmp/ansible-tmp-1550178583.24-35955954120606/AnsiballZ_command.py\n', '')
<172.28.28.81> ESTABLISH SSH CONNECTION FOR USER: vagrant
<172.28.28.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/f66f05c055 172.28.28.81 '/bin/sh -c '"'"'setfacl -m u:wwwsvr:r-x /var/tmp/ansible-tmp-1550178583.24-35955954120606/ /var/tmp/ansible-tmp-1550178583.24-35955954120606/AnsiballZ_command.py && sleep 0'"'"''
<172.28.28.81> (0, '', '')
<172.28.28.81> ESTABLISH SSH CONNECTION FOR USER: vagrant
<172.28.28.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/f66f05c055 -tt 172.28.28.81 '/bin/sh -c '"'"'sudo -H -S -n -u wwwsvr /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ntzchfzqggiteuqwzpiurlloddbdhevp; /usr/bin/python /var/tmp/ansible-tmp-1550178583.24-35955954120606/AnsiballZ_command.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<172.28.28.81> (0, '\r\n{"changed": true, "end": "2019-02-14 13:09:44.800191", "stdout": "Starting Filebeat", "cmd": ["sh", "/apps_ux/filebeat/6.6.0/bin/startup.sh"], "rc": 0, "start": "2019-02-14 13:09:43.792122", "stderr": "+ export JAVA_HOME=/sw_ux/jdk\\n+ JAVA_HOME=/sw_ux/jdk\\n+ echo \'Starting Filebeat\'\\n+ /apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /apps_data/logs/filebeat", "delta": "0:00:01.008069", "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "sh /apps_ux/filebeat/6.6.0/bin/startup.sh", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}}\r\n', 'Shared connection to 172.28.28.81 closed.\r\n')
<172.28.28.81> ESTABLISH SSH CONNECTION FOR USER: vagrant
<172.28.28.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/f66f05c055 172.28.28.81 '/bin/sh -c '"'"'rm -f -r /var/tmp/ansible-tmp-1550178583.24-35955954120606/ > /dev/null 2>&1 && sleep 0'"'"''
<172.28.28.81> (0, '', '')
changed: [172.28.28.81] => {
"changed": true,
"cmd": [
"sh",
"/apps_ux/filebeat/6.6.0/bin/startup.sh"
],
"delta": "0:00:01.008069",
"end": "2019-02-14 13:09:44.800191",
"invocation": {
"module_args": {
"_raw_params": "sh /apps_ux/filebeat/6.6.0/bin/startup.sh",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"rc": 0,
"start": "2019-02-14 13:09:43.792122",
"stderr": "+ export JAVA_HOME=/sw_ux/jdk\n+ JAVA_HOME=/sw_ux/jdk\n+ echo 'Starting Filebeat'\n+ /apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /apps_data/logs/filebeat",
"stderr_lines": [
"+ export JAVA_HOME=/sw_ux/jdk",
"+ JAVA_HOME=/sw_ux/jdk",
"+ echo 'Starting Filebeat'",
"+ /apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /apps_data/logs/filebeat"
],
"stdout": "Starting Filebeat",
"stdout_lines": [
"Starting Filebeat"
]
}
META: ran handlers
META: ran handlers
PLAY RECAP ****************************************************************************************************************************************************************************************************
172.28.28.81 : ok=18 changed=7 unreachable=0 failed=0
On remote server:
[6.6.0:vagrant]$ cd bin
[bin:vagrant]$ ls -ltr
total 36068
-rwxr-xr-x. 1 wwwadm wwwadm 36927014 Jan 24 02:30 filebeat
-rwxr-xr-x. 1 wwwadm wwwadm 478 Feb 14 12:54 startup.sh
[bin:vagrant]$ pwd
/apps_ux/filebeat/6.6.0/bin
[bin:vagrant]$ more startup.sh
#!/usr/bin/env bash
set -x
export JAVA_HOME="/sw_ux/jdk"
#To save pid into a file is an open feature: https://github.com/elastic/logstash/issues/3577. There is no -p flag for filebeat to save the pid and then kill it.
echo 'Starting Filebeat'
/apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /a
pps_data/logs/filebeat &
No running process is found when executing the ps command:
[bin:vagrant]$ ps -fea | grep filebeat | grep -v grep
However, if I connect to the remote server, I am able to run filebeat by executing the script with the user wwwadm and filebeat starts successfully:
[bin:wwwadm]$ pwd
/apps_ux/filebeat/6.6.0/bin
[bin:wwwadm]$ id
uid=778(wwwadm) gid=778(wwwadm) groups=778(wwwadm) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[bin:wwwadm]$ ./startup.sh
+ export JAVA_HOME=/sw_ux/jdk
+ JAVA_HOME=/sw_ux/jdk
+ echo 'Starting Filebeat'
Starting Filebeat
+ /apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /apps_data/logs/filebeat
[bin:wwwadm]$ ps -fea | grep filebeat | grep -v grep
wwwadm 19160 1 0 15:12 pts/0 00:00:00 /apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /apps_data/logs/filebeat
Thanks
You should use nohup to run it in the background, because when Ansible exits, all processes associated with the session are terminated. To avoid this, use nohup.
The correct task is (using the shell module, since the redirection and backgrounding need a shell):
- name: "Start Filebeat"
  # shell: "{{ filebeat_install_dir }}/bin/startup.sh"
  shell: "nohup sh {{ filebeat_install_dir }}/bin/startup.sh &>> startup.log &"
  become: yes
  become_user: "{{ filebeat_install_as }}"
You have to use the disown built-in command to inform the shell that it should not kill background processes when you disconnect; you can also use nohup for the same effect.
Having said that, you are for sure solving the wrong problem, because if^H^Hwhen filebeat falls over, there is nothing monitoring that service to keep it alive. You'll want to use systemd (or its equivalent on your system) to ensure that filebeat stays running, and by using the mechanism designed for that stuff, you side-step all the "disown or nohup" business that causes you to ask S.O. questions.
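As a rough illustration of that last point, the service could be managed from the same role with a unit file. This is a sketch only: the unit name, restart policy, and the choice to run as wwwadm are my assumptions, and the ExecStart line reuses the paths from startup.sh above (run in the foreground, since systemd handles the backgrounding):
- name: Install a systemd unit for Filebeat
  become: yes
  copy:
    dest: /etc/systemd/system/filebeat.service
    content: |
      [Unit]
      Description=Filebeat
      After=network.target

      [Service]
      User=wwwadm
      Environment=JAVA_HOME=/sw_ux/jdk
      ExecStart=/apps_ux/filebeat/6.6.0/bin/filebeat -c /apps_ux/filebeat/6.6.0/config/filebeat.yml -path.home /apps_ux/filebeat/6.6.0 -path.config /apps_ux/filebeat/6.6.0/config -path.data /apps_data/filebeat -path.logs /apps_data/logs/filebeat
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

- name: Enable and start Filebeat via systemd
  become: yes
  systemd:
    name: filebeat
    state: started
    enabled: yes
    daemon_reload: yes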